import facepalm;

Sometimes bugs can be particularly evasive, and today I ran into just such a one.

A module deep in our codebase was throwing an Error, but only in Mozilla’s Firefox browser.

The error was NS_ERROR_ILLEGAL_VALUE.

I did some quick DuckDuckGoing and found that the error occurs when a native DOM function in Firefox is passed a value of a type it does not expect.

The stack trace led back to this line in our application code:

const hit = find( cache, c => c.original === obj );
if ( hit ) {
  return hit.copy;
}
// ...some time later...
return someUncachedObject;

“@-E$!&&@#”, I thought. “Why is lodash’s find() function passing a bad value to a native function?”

You see, I use lodash all the time. So much, in fact, that I made one fatal error in my diagnosis.

I assumed that because the find() function was defined, lodash had indeed been imported.

How. Wrong. I. Was.

It turns out that window.find() is, in fact, a non-standard, but nevertheless nearly omnipresent function that is designed to search a DOM document for a particular string. And since any function attached to window is global, a missing import of the same name – say, a missing lodash/find import – would not raise any alarms. The code built. The code ran. And it ran without error in every browser but Firefox. Why?

The window.find() function expects a first argument of type String. In modern browsers other than Firefox (pretty much all the Chromium-based browsers), passing a non-String argument to window.find() will simply cause the function to return false. As you can see in the snippet above, this rendered the cache useless, but the application nevertheless continued to work. In Firefox, however, window.find() will throw if its first argument is not of type String. Thus my bug.

I am ashamed to say how long it took me to realize lodash/find was not the function being called.

In the end I applied the great wisdom of Spock’s ancestors, and started considering things that could not possibly be the case, until it dawned on me that perhaps – just perhaps – find() was not what it appeared to be after all.



And a single import find from "lodash/find"; statement fixed the bug.

Fun with Homebrew casks

One of my favorite utilities for OSX is Homebrew, a package manager that lets you easily install programs from the terminal.

One of my favorite pastimes is thumbing through the Homebrew Cask recipes to find new programs and utilities to install. Some are pretty nifty, like Zotero which manages research bibliographies. Or Electric Sheep which harnesses the power of your “sleeping” devices to crowdsource digital artwork. Or Finicky, which lets you specify which of your web browsers you want to open specific links. (Maybe you use Brave for normal browsing but want to open all google.com links in Chrome.)

Unfortunately the Cask recipe files have no real descriptions, so I usually just fish through them and dig out the homepage links of any Cask recipe file that has an interesting name. It’s kind of like a digital treasure hunt.

To make things even more fun, I cloned the homebrew-cask repo and came up with a simple shell script that will randomly choose a recipe and open its homepage for me.

find ~/projects/homebrew-cask/Casks -type f | \
  shuf -n 1 | \
  xargs cat | \
  ack homepage | \
  awk '{print $2}' | \
  xargs open

Joker Movie Review

I finally watched Joker last night.

It is dark. But the movie earns all of its praise. It really is an extremely well done film. Everything works: cinematography, acting, music. It is very well crafted.

There are several major intertwined themes running through the film. A lot of people have interpreted it in a lot of ways, and they aren’t wrong; there’s a lot to unpack.

The biggest theme is probably “rich, powerful people are extremely out of touch with average, or below average people”, which we see every day now (Epstein, Weinstein, Clintons, etc.). This is the big Wayne connection to Joker. The movie tips the whole Batman story on its head; you actually feel sympathy for Arthur Fleck, and while you don’t think the Waynes are necessarily “bad”, the film portrays them as very out of touch, and condescending to the plight of the less fortunate. It’s actually well done; I didn’t feel that I was being preached at, like “look at the bad rich people!”. The message was more, “see the chasm here – these people have no idea what it’s like to live in normal society”.

The second is probably the damage that single mothers can do to sons. The void of an absent father can have a shattering impact on a child. This was probably the most gut-wrenching part of the film for me, and I’m surprised the writers tackled it. Well done though.

The next is probably mental health, and how we deal with disturbed people in society. Although I’m not sure I’d call this a “theme” because it’s more of a given in the film, that society handles this poorly. It’s not making a statement about it per se, it’s just assuming it. But the portrayal is well done, and thought provoking.

A tangential theme is that people need some degree of power in their lives – to feel they control something – and when that is taken away, the hopelessness they experience can cause them to seek power in other, socially taboo ways (e.g., violence), not because they would have preferred that, but because they psychologically have no alternative. This theme is more subtle, but probably the biggest statement the movie makes. The progressive criticism (“this movie is just about white, violent, incels!”) targets this theme, because progressives don’t believe that white males can ever be powerless.

Overall I highly recommend the film. As I said, it earns all the praise it gets. It’s one of the few films I’d consider a modern masterpiece.

Book Review - The Mythical Man-Month

The Mythical Man-Month is one of those books that is, well, mythical in the circles to which it pertains – that is, the software engineering and computer science fields. It is mythical because many people have heard of it, most agree that it is “classic”, but not many remember exactly why. Perhaps they have never read it, only absorbed its ideas through hearsay. Or perhaps they did read it, but so long ago that its principles have been carried away by the tide of time.

Either way, I have finally finished the book in what I consider to be an unreasonable amount of time. It’s not overly long, or overly verbose, but I have a bad habit of reading a little from a lot of books at the same time, which means I don’t finish a book for a while. I took notes as I went so that hopefully time will be more gracious to my mind when someone asks me, in the years to come, if I’ve read Frederick Brooks.

Widely considered the central theme of the book, Brooks’s Law, in summary, is that adding programmers to a late software project will not make it go faster, but rather slower. This was a pattern Brooks saw during his years as a manager in larger companies that needed many engineers to write software either for internal use, or eventually for sale as products. Managers assumed that the central problem of software development – why projects did not finish on time or on budget – was a mechanical one that could be resolved with mechanical solutions. Maybe there just wasn’t enough manpower. Or maybe the tooling was inferior, and slowed progress. Or maybe the wrong language had been chosen for the task. While all of these things can, and do, affect software engineering endeavors, Brooks’s major insight was that they were the accidents of software engineering, and not its essence; and that the essence is what causes projects to fail.

The essence of “systems programming” (as Brooks called it) is one of complexity – an irreducible complexity1 – that of itself cannot be fixed by mechanical solutions. This complexity arises from the fact that software development is a creative, human process. The engineer must, to write a program, conceptualize the problem space correctly and then use the tools at his disposal to create a solution. As projects grow, engineers are added, the consequence of which, as Brooks keenly observed, tends to make the project slower because it increases the number of communication pathways among team members (with every addition), and the conceptual foundation of the project becomes spread among many minds, in ways that are often fragmented and incorrect. This, Brooks argues, is the core problem, and the solution to the problem is to adapt to it rather than try to conquer it.

“Complexity is the business we are in, and complexity is what limits us.”

How does one adapt to the problem of conceptual complexity in software engineering? Brooks proposed a number of solutions.

Conceptual integrity and communication

Brooks proposed that the conceptual integrity of a project – the core ideas about what the problems are and the models used to represent those problems – is of primary importance and must be safeguarded. The most efficient way to ensure this happens is to entrust that integrity to one, or at most a couple, of individuals, who enforce it by vetting the work of other team members on the project. They become the source of conceptual truth.

Communication and team structure

Because complexity scales with the number of communication pathways in a team, Brooks proposed that “surgical teams” be used in most software projects. These teams are composed of the conceptual guardian(s) (the “surgeon”) and as few people as possible to get the work done. These teams are part of an organization as a whole, however, and there is always a management structure with which they must integrate. The key to good management, according to Brooks, is to realize that management is about action and communication. The person at the top should rely on his subordinate program managers to take action when needed, and he should give them the authority to do so. He should never, ever demand action when reviewing a general status report, however, because this will debilitate his program managers and move the decision-making power further from the decisions that need to be made. Program managers should be concerned almost exclusively with managing the lines of communication in the team, and not with making decisions at all. Pushing decision making “down” to the program manager is effective because it gives program managers a stake in the total authority of the company, and therefore preserves that authority.

“The purpose of organization is to reduce the amount of communication and coordination necessary…”

“…the center gains in real authority by delegating power, and the organization as a whole is happier and more prosperous.”

Complexity in code

Complexity can be addressed in the code itself by reducing the mental burden a programmer has to carry while implementing code that has conceptual integrity. In the first edition of Brooks’s book, he insisted that all programmers on a team be familiar with all modules (or entities) within a software project. In Brooks’s mind, this was a good way to safeguard the integrity of the system, because everyone would have a working understanding of all code. In a subsequent edition of the book, he backtracked on this position, because it essentially suffered from the mental equivalent of the communication problem. Code changes over time; no programmer ever has a complete and accurate understanding of the system because it is not static. Brooks eventually came around to a view promoted by Canadian engineer David Parnas:

“[David] Parnas argues strongly that the goal of everyone seeing everything is totally wrong; parts should be encapsulated so that no one needs to or is allowed to see the internals of any parts other than his own, but should see only the interfaces… [I initially proposed that] Parnas’s proposal is a recipe for disaster [but] I have been quite convinced otherwise by Parnas, and totally changed my mind.”

Information hiding, or encapsulation, allows a programmer to “use” or take advantage of, code, without having to know how it works internally, only how to ask it for something. The mental footprint of understanding an interface (the way to ask code for something) is orders of magnitude smaller than the mental footprint required to understand the implementation behind the interface. And interfaces don’t change nearly as often as implementations (in a well-designed system).

Side-effects (changes created by a piece of code that reach beyond that code, or even beyond the system), likewise, should all be identified, well understood, and encapsulated (or eliminated) to reduce the mental burden of worrying about tangential consequences of implementations, which are often causes of bugs and project delays.

Documentation

Documentation is central to adapting to complexity. Documenting decisions made in a software project is part of fostering the creative process itself:

“…writing the decisions down is essential. Only when one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones.”

Documentation need not be overly verbose; Brooks believed, though, that even overly verbose documentation is better than none. And the documentation regarding technical decisions – design and implementation – should be as close to the code itself as possible (even within the same code files) to ensure the documentation will be maintained and updated as the code itself changes. The goal of documentation should be twofold: 1) to create an overview of the particular system concern the documentation addresses, and 2) to identify the purpose (the why) of the decisions made in regard to that concern. Documentation is not only for other programmers to read; it is often to benefit the original author as well.

“Even for the most private of programs, prose documentation is necessary, for memory will fail the user-author.”

Conclusion

There are many more things I could say about Mythical Man-Month. More subtle insights, longer expositions on points articulated above. But these points are the ones that stuck with me most, the ones I felt were immediately relevant to my own work and career. Mythical Man-Month is one part history, one part knowledge, and one part wisdom. It lets us know that great minds in the past struggled with, devised solutions to, and retracted and revised solutions to, the problems that make programming a struggle of genuine creativity.

This last quote is taken from a critique of Brooks, one that he found to be excellent, and expresses the same conclusion (albeit maybe a bit more pessimistic) at which Brooks himself arrived: the central problem of software is complexity, not tools or processes, and that will never change.

“Turski, in his excellent response paper [to No Silver Bullet] at the IFIP Conference, said eloquently: ‘Of all misguided scientific endeavors, none are more pathetic than the search for the philosopher’s stone, a substance supposed to change base metals into gold. The supreme object of alchemy, ardently pursued by generations of researchers generously funded by secular and spiritual rulers, is an undiluted extract of wishful thinking, of the common assumption that things are as we would like them to be. It is a very human belief. It takes a lot of effort to accept the existence of insoluble problems. The wish to see a way out, against all odds, even when it is proven that it does not exist, is very, very strong. And most of us have a lot of sympathy for those courageous souls who try to achieve the impossible. And so it continues. Dissertations on squaring a circle are being written. Lotions to restore lost hair are concocted and sell well. Methods to improve software productivity are hatched and sell very well. All too often we are inclined to follow our own optimism (or exploit the hopes of our sponsors). All too often we are willing to disregard the voice of reason and heed the siren calls of panacea pushers.’”


  1. Not to be confused with the same term used by some creationists to defend their particular ideas about the origin of the universe.

Author’s note: I have removed the two articles I posted previously with extensive quotes from The Mythical Man-Month, because I was unsure whether the number of quotes posted constituted “fair use”, and I wish to respect the copyright holder’s interests. I have not been contacted by the copyright owner or received any kind of DMCA letter; this is entirely my own decision. I have used those quotes to formulate the article above.

Protip - return from exceptional conditions early

During a recent code interview, I noticed a React component with a render method written in the following (abbreviated) form,

render() {
  return this.state.items.length > 0 ? (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  ) : (
    ''
  );
}

where ComponentWithLotsOfProps had at least a dozen props, some of which were not simple primitive values.

While there is nothing technically wrong with this render method, it could be better. It suffers from a few deficiencies.

First, ternaries are objectively difficult to read when they are not short. It is difficult to grok what the method actually produces because the whole ternary is returned, requiring the reader to do double work to find the “implicit” returns (there are two) rather than looking for the easily identifiable return keyword.

Second, one must read the entire method to know what gets returned if there are no items in state. Is it a component? Is it null? Is it an empty string? That is unknown until the whole method has been read.

Third, if additional conditions are required in future work to determine what will be rendered, they cannot easily be introduced in this method.

A better alternative is to omit the ternary, and explicitly return the exceptional condition values first.

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

Due to reduced nesting, this is far easier to read, and return values are also easily identifiable. If additional conditions must be evaluated in the future, modifying this method becomes much simpler:

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  if (this.state.items.length === 1) {
    return (<SingleItemComponent item={this.state.items[0]} />);
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

As with most things in programming: the simpler, more explicit, the better.

Modern JavaScript tooling is too complicated? Hacker News

This post on Hacker News is worth the read, not only for OP’s posted content, but because of the follow-up comments (routinely the better part of a Hacker News submission).

You know it’s time for popcorn when the thread starts like this:

…if only the tooling was too complicated, it would not be too bad. IMAO the entire front-end JS world is one big pile of MISERY, complicated is not the word or the problem at all.

Destructuring Reconsidered

While working with React for the last five months, I’ve noticed that React developers make extensive use of object destructuring, especially in function signatures. The more I use React the less I like this trend, and here are a few, short reasons why.

There are countless books by wise industry sages1 that discuss how to write good functions. Functions should do one thing, and one thing only; they should be named concisely; their parameters should be closely related; etc. My observation is that destructured function parameters tend to quickly lead to violations of these best practices.

First, destructuring function parameters encourages “grab bag” functions where the destructured parameters are unrelated to each other. From a practical point of view, it is the destructured properties of the actual parameters that are considered, mentally, as parameters to a function. At least, the signature of a destructured function reads as if they are:

function foo({ bar, baz }, buzz) {}

A developer will read this as if bar, baz, and buzz are the actual parameters of the function (you could re-write the function this way, so they might as well be), but this is incorrect; the real parameters are buzz and some other object which, according to best practice, should be related to buzz. But because the first parameter (param1) is destructured, we now have properties bar and baz which are one step removed from buzz, and therefore the relationship between param1 and buzz is obscured.

This can go one of three ways:

  1. if param1 and buzz are related, we do not know why;
  2. if param1 and buzz are not related (but bar and baz are related to buzz) then the function is poorly written;
  3. if bar, baz, param1, and buzz are all closely related, then the function is still poorly written, as it now has three “virtual parameters” instead of just two actual parameters.

Second, destructured functions encourage an excessive number of “virtual parameters”. For some reason developers think this function signature is well written:

function sendMail({ firstName, lastName, email }, { address1, city, state, zip }, { sendSnailMail }) {}
// function sendMail(user, address, mailPreferences) {}

“But it only has three parameters!”, they say. While technically true, the point of short function signatures is to scope the function to a single, tangible task and to reduce cognitive overhead. For all practical purposes this function has eight parameters. And while the purpose of this function is fairly obvious based on its name, less expressive functions are far more difficult to grok.

Third, destructuring makes refactoring difficult. Sure, our tools will catch up some day. But from what I’ve seen, modern editors and IDEs cannot intelligently refactor a function signature with destructured parameters, especially in a dynamically, weakly typed language like JavaScript. The IDE or editor would need to infer the parameters passed into the function by examining invocations elsewhere in code, then infer the assignments to those parameters to determine which constructor function or object literal produced them, then rewrite the properties within those objects… and you can see how this is a near impossible feat. Or at the very least, how even the best IDEs and editors would introduce so many bugs in the process that the feature would be avoided anyway.

Fourth, developers must often trace the invocation of a function to its definition. In my experience, code bases typically have many functions with the same name used in different contexts. Modern tools are smart, and examine function signatures to try to link definitions to invocations, but destructuring makes this process far more difficult. Given the following function definition, the invocations would all be valid (since JS functions are variadic), but if a code base had more than one function named foo, determining which invocation is linked to which definition is something of a special nightmare.

// in the main module
function foo({ bar, baz }, { bin }, { buzz }) {}

// in the bakery module
function foo(bar, { baz }) {}

// invocations
foo({ bar, baz });

foo(anObject, anotherObject);

foo(1, { bin }, null);

In contrast, functions with explicitly named parameters (usually the signature parameters are named the same as the variables and properties used to invoke the function) make these functions an order of magnitude easier to trace.

Fifth, destructured parameters obscure the interfaces of the objects to which they belong, leaving the developer clueless as to the related properties and methods on the actual parameter that might have use within the function. For example:

function handle({ code }) {}

What else, besides code may exist in the first parameter that will allow me to more adequately “handle” whatever it is that I’m handling? The implicit assumption here is that code will be all I ever need to do my job, but any developer will smirk knowingly at the naivety of that assumption. To get the information I need about this parameter I have to scour the documentation (hahahahaha documentation) in hopes that it reveals the actual parameter being passed (and doesn’t just document the destructured property), or manually log the parameter to figure out what other members it possesses. Which brings me to my last point:

Logging. I cannot count the number of times I have had to de-destructure a function parameter in order to log the complete object being passed to the function, because I needed to know some contextual information about that object. The same applies for debugging with breakpoints. (I love when Webpack has to rebuild my client code because I just wanted to see what actual parameter was passed to a function. Good times.)

Don’t get me wrong – I’m not completely against destructuring. I actually like it quite a bit when used in a way that does not obscure code, hinder development, or hamstring debugging. Personally I avoid destructuring function parameters in the signature, and instead destructure them on the first line of the function, if I want to alias properties with shorter variable names within the function.

function sendEmail(user, address, mailPreferences) {
  const { firstName, lastName, email } = user;
  const { address1, city, state, zip } = address;
  const { sendSnailMail } = mailPreferences;
  //...
}

This pattern both conforms to best practices for defining functions, and also gives me a lightweight way to extract the bits of information I need from broader parameters, without making it painful to get additional information from those parameters if I need it.

Don’t use the new shiny just because it’s what all the cool kids do. Remember the wisdom that came before, because it came at a cost that we don’t want to pay again.


  1. Clean Code, Code Complete, etc.

Book Review - How to Watch TV News by Neil Postman

My grandparents were religious about watching the evening news. On the occasions that we visited them I recall the family gathering in the living room after dinner to watch the local half-hour news show, followed by the weather. All activity in the house ceased for those precious minutes, and all eyes were glued to the cathode ray tube’s mesmerizing colors.

Now, as an adult, I take for granted the never-ending stream of news available to me 24 hours a day, seven days a week, every day of the year – even holidays. Where my grandparents had a brief window to the wider world, I live outside the house entirely, to the point where I do not find it odd to worry about affairs in foreign countries that will never, ever affect my daily life. But for some reason I know about them, because they are news, and news – we are told – is important.

In 1992, cultural critic Neil Postman and journalist Steve Powers published a book called How to Watch TV News. I can’t say what reception the book received, but it was at least significant enough for an updated edition to appear in 2008. That a mere sixteen year gap warranted an updated edition is a testament to the speed with which technology has transformed the delivery and consumption of news itself. The first publication, however, deals with news as Postman and Powers – and myself, as a child – experienced it in the 80s and early 90s, largely delivered to homes in regular time slots from well-known networks.

“What is news?” Postman asks. (I shall refer to both Postman and Powers as “Postman” in the remainder of this review, only because I am more familiar with his work.) Most people would answer that news is the most important events that occur during the day. But many important events occur during the day, and the news only occupies a limited time slot on network television in a 24 hour period. (Even with modern technology, more significant events occur daily than could possibly be covered in 24 hours, even if the manpower and network bandwidth were available.) News, then, is a curated selection of events on which someone reports. And the person reporting is likely not the person who decides what events are covered, though they are the person responsible for interpreting events, and relaying that interpretation to viewers.

What criteria are used to determine which daily events are most newsworthy? To answer this question, Postman looks at how news networks make money. Advertisers spend dollars to place commercials on networks that attract eyes, so to be profitable – that is, to court advertisers – news programs need eyes. News stories, according to Postman, are selected based on which stories will get the most eyes to remain on the screen for the duration of the program. Compared to regular entertainment programs, news programs are relatively inexpensive to produce, but tend to attract significant viewership, so their profit margins are higher. And since audiences who watch news programs tend to be more educated, more attentive, and wealthier, they are more susceptible to clever advertising.

News networks employ a number of interesting strategies to keep audiences hooked during programs. Popular entertainment shows are often lead-in programs that already draw a significant audience, and which are likely to leave eyes lingering on the couch after they conclude. “Teasers” for upcoming news are peppered within these programs – “Stay tuned for a story of murder and mayhem, coming up at 5!” – the visual equivalent of “clickbait” designed to entice audiences to stay. Anchors and their supporting staff who cover weather, sports, etc. are all chosen with their aesthetics in mind. Better looking people are paid more to fill a part – to be an actor – in the drama that is TV news. Together this ensemble forms a “family” metaphor: two co-anchors (of the opposite sex) who are “husband and wife”, and their subordinates who play the children. Viewers are brought into their happy home as guests. Everyone in the family has a role to play, and more importantly, everyone is happy to play that role. It surpasses Ward and June Cleaver’s family as the ideal.

These actors are paid large sums to deliver news to audiences, but reporters write and frame the stories anchors deliver according to their own mental points of origin. When covering events, reporters employ three types of language: description (what actually happened), judgement (how they feel, morally, about what happened), and inference (drawing conclusions about related ideas based on judgement). The very language with which reporters communicate news has connotative meaning that goes beyond the visuals that “show” the story to a viewer. In fact, pictures only speak to the concretes, the particulars, of reality. They do not deal in abstractions at all. Language is the means by which humans unify the infinite variety of particulars in the universe, enabling us to deal with it in meaningful (and sane) ways. So while viewers might think that the images or video being displayed on a nightly news program is “news”, it is in fact not – “news” is the reporter’s connotations about those images, laced, as they are, with a context that is often not conveyed. Remember, Postman tells us, that different people experience events in different (often contradictory) ways. Eye witness testimony is of dubious reliability at best; and what is news, except the eyewitness testimony of a single reporter?

A news program’s time limitations place temporal restrictions on the quantity of information that can be squeezed into the program.

“Time works against understanding, coherence, even meaning.”

The more instantaneously information is delivered, the less historical context and analysis can be delivered with it. Shorter news segments mean that context is necessarily dropped in favor of more visually scintillating content. Reporters or anchors may make contextual comments, but they are usually passed off as errata to an otherwise complete visual work. The increasing number of retractions, updates, and corrections in modern news stories proves Postman’s point. To competently watch the news, then, a viewer must already be armed with context – from books, articles, and other sources of information. The viewer must not be a passive vessel to be filled with news, but must be an active participant in, and critic of, the news.

Postman’s final chapter contains eight recommendations for people who watch TV news. These recommendations stand the test of time, and they may be applied to news one receives from any visual medium, including the Internet (YouTube and Facebook, especially).

1 - “In encountering a news show, you must come with a firm idea of what is important.” A viewer must understand that news is delivered to the public based on the financial interests of the network. To paraphrase Postman, reporters are not as powerful as accountants. Viewers will only be as competent in their consumption of news as they have been diligent in the development of their own knowledge.

2 - “In preparing to watch a TV news show, keep in mind that it is called a ‘show’.” Teasers, soundtracks, fancy visuals, photogenic anchors – these are the things of entertainment, and they are calculated to affect viewers emotionally. TV news is drama LARPing as education.

3 - “Never underestimate the power of commercials.” Commercials, Postman writes, are “a new, albeit degraded means of religious expression in that most of them take the form of parables, teaching people what the good life consists of… that, in fact, is one of the reasons commercials are so effective. People do not usually analyze them. Neither, we might say, do people analyze biblical parables, which are often ambiguous…”

4 - “Learn something about the economic and political interests of those who run TV stations.” Since all news is chosen and delivered through a filter of values, to judge it competently a viewer must know something about those from whom it is delivered. In the 80s and 90s this may have been a more difficult task; in the twenty-first century it seems that all reporters wear political affiliations on their sleeves.

5 - “Pay special attention to the language of newscasts.” Language frames reality, and also betrays the biases and assumptions of the people using it. Since the purview of television news is to arrest the viewer for ratings (and hence, lure advertisers), it can be assumed that the language chosen to convey news will be calculated to provoke maximum emotional response, whether warranted or not. Perhaps this is why, Postman writes, “people who are heavy television viewers, including viewers of television news shows, believe their communities are much more dangerous than do light television viewers. Television news, in other words, tends to frighten people.” The more hysteria that can be packed into every sentence a reporter writes, the better. People love watching train wrecks.

6 - “Reduce by at least one-third the amount of TV news you watch.” The reasons, by now, should be obvious. Spend your freed time reading.

“…each day’s TV news consists, for the most part, of fifteen or so examples of one or the other of the Seven Deadly Sins… It cannot possibly do you any harm to excuse yourself each week from acquaintance with thirty or forty of these examples… TV news does not reflect normal, everyday life.”

7 - “Reduce by one-third the number of opinions you feel obliged to have.” One interesting side-effect of TV news is that it compels people to feel like they ought to parrot what has been reported, and that they are morally or intellectually inferior if they reserve judgement or admit to ignorance on a reported subject. But this is nonsense; insanity, even. No well-informed insights can come from sound bites and contextless reporting.

8 - “Do whatever you can to get schools interested in teaching children how to watch TV news shows.” Perhaps “critical viewing” could be taught alongside critical thinking in school classrooms. Students are certainly exposed to far more news than is appropriate for their happiness and well-being. We should consider it morally obligatory to equip them to deal with the deluge sooner rather than later.

Though dated, How to Watch TV News offers a tremendous amount of insight into the consumption of any sort of visual media. The Internet has, by and large, taken the place of television in the twenty-first century, and the media establishment – to which the term “fake news” sticks like spaghetti to a wall – loses its collective shit daily. Information comes to us at a tremendously unhealthy rate, overwhelming the senses and clouding the mind, yet our intellectual and moral standings, our very identities, in fact, are judged according to which news source gains our allegiances. Perhaps news is not as important as we think it is. Perhaps it is more important to step back and ask what we should know, and why it’s important, before becoming a passive receptacle for someone else’s answers to those questions. Postman thinks so, and I agree.

Book Review - On the Meaning of Life by Will Durant

Would you know what to say to a total stranger who asked you to convince him not to commit suicide?

In 1930, that is the very situation that prompted historian Will Durant to ponder and write about the most profound question of all: what is the meaning of life? After ad libbing his own answer to a desperate soul whom he never saw again, he penned a letter to the foremost minds of his time, inquiring: “…what are the sources of your inspiration and your energy, what is the goal or motive-force of your toil, where you find your consolations and your happiness, where, in the last resort, your treasure lies?” Some responded, and in 1931 Durant compiled their letters in a short book, On the Meaning of Life.

Among his respondents were Mohandas Gandhi, H. L. Mencken, Sinclair Lewis, Dr. Charles Mayo, George Bernard Shaw, Bertrand Russell, and others. Many replies were thoughtful contributions; some were terse and dismissive, but Durant reported each in good spirits and with the dignity to laugh at those who considered it beneath their time to be thorough with him.

Durant spends the first six chapters discussing why modern man is increasingly inclined to hopelessness and despair, leading to an annual increase in suicides. The old ways, the old sources of meaning – religion and tradition – had been relegated to myth and legend by scientists and historians. All the while man’s view of himself became more mechanistic, more deterministic, and the gains in knowledge, though dispelling false beliefs of the past, offered up no unifying system of hope and significance for newly untethered minds. The world seemed hopeless, Durant concluded, but there were many – he among them – who believed that lost hope is not necessarily a hopeless loss.

The replies are grouped into chapters based on the overall characteristics that categorize the respondents:

  • the men of letters
  • entertainers, artists, scientists, and educators
  • the religionists
  • the women1
  • a prison convict serving a life sentence
  • the skeptics

Without spoiling the joy of reading each reply for yourself, I want to call your attention to several ideas that I think form the meat of the most articulate replies.

Some respondents found purpose in their work, but not just because they felt productive. They felt they were uniquely suited, by their own personalities and dispositions, to perform the tasks that ultimately fulfilled them. Meaning, for them, came from the knowledge that their best parts were being utilized in the best possible ways.

Another respondent pointed out that, regardless of how much we claim to know now, we hardly know everything. To conclude definitively that life is meaningless based on so little information is premature at best.

In the perspective of another, the desire for immortality is tied to our desire for meaning. We want to be part of something lasting. If immortality is real, and there is a life after this one, we will have the opportunity to experience this. But if not, even though we won’t live forever, we will never be conscious of not living. In our own minds, we will be, then we will be not; in either case, we should live as if immortal because practically, we are.

Finally, the longest and most touching reply came from a convict serving a life sentence in Sing Sing prison. I take the liberty of quoting a bit from it here:

“Truth is not beautiful, neither is it ugly. Why should it be either? Truth is truth, just as figures are figures. When a man wishes to learn the exact condition of his business affairs, he employs figures and, if these figures reveal a sad state of his affairs, he doesn’t condemn them and say that they are unlovely and accuse them of having disillusioned him. Why, then, condemn truth, when it only serves him in this enterprise of life as figures serve him in his commercial enterprises? That idol-worshipping strain in our natures has visioned a figure of Truth draped in royal raiment and, when truth in its humble form, sans drapery, appears to us, we cry, ‘Disillusionment.’

Custom and tradition have caused us to confuse truth with our beliefs. Custom, tradition and our mode of living have led us to believe we cannot be happy, save under certain physical conditions possessed of certain material comforts. This is not truth, it is belief. Truth tells us that happiness is a state of mental contentment. Contentment can be found on a desert island, in a little town, or the tenements of a large city. It can be found in the palaces of the rich or the hovels of the poor.

Confinement in prison doesn’t cause unhappiness, else all those who are free would be happy. Poverty doesn’t cause it, else the rich all would be happy. Those who live and die in one small town are often as happy, or happier than many who spend their entire lives in travel… Happiness is neither racial, nor financial, nor social, neither is it geographical…

Reason tells us that it is a form of mental contentment and – if this be true – its logical abode must be within the mind.”

The final chapter of the book contains Durant’s answers to his own questions, formulated in the same year after receiving “several letters [from others] announcing suicide”. His reply is titled “Letters to a Suicide” and is a beautiful call to find meaning within the very improbability of life itself; that we have life at all, and that it offers us actual joy and happiness, is itself meaningful.

“Nature will destroy me, but she has a right to – she made me, and burned my senses with a thousand delights; she gave me all that she will take away. How shall I ever thank her sufficiently for these five senses of mine – these fingers and lips, these eyes and ears, this restless tongue and this gigantic nose?”

Overall I give the book 4/5 stars. Durant’s prose is, as ever, mind candy. The variety of responses in content, length, and depth – and their sources and historical context – give the reader much to think about and, surprisingly, don’t attempt to over-simplify or trivialize Durant’s questions. My only (minor) complaint regards the book’s length. It seems too short for such a complex subject, and I would have enjoyed, very much, additional material collected over a longer period of time. I cannot fault Durant though. Faced with the despair of suicidal strangers, I believe he pushed to collect the best answers in the most condensed form possible. The result is rich, and worth reading.


1 Recall that the year was 1931, and the role of women was undergoing metamorphosis. That Durant devoted a chapter to women he greatly respected is notable. Durant was very eager to see women contribute to the “great conversation” of history. His wife Ariel, a co-author on many of Durant’s own works, shared this passion.

Creating Reusable Code

Creating reusable software is challenging, especially when that software may be reused in situations or scenarios for which it was not necessarily designed. We’ve all had that meeting where a boss or manager asked the question: “What you’ve designed is great, but can we also use it here?”

In the last month I’ve had this exact experience, from which I’ve learned a number of valuable lessons about crafting reusable software.

eTexts and annotations

When I first started working for eNotes, my initial task was to fix some code related to electronic texts that we displayed on our site (e.g., Shakespeare, Poe, Twain, etc.). We have a significant collection of annotations for many texts, and those annotations were displayed to users when highlights in the text were clicked. A couple of years ago we spun this technology off into a separate product, Owl Eyes, with additional teacher tools and classroom management features. Because of my experience with the existing eText and annotation code, and because I am primarily responsible for front-end JavaScript, I was tasked with building a “Kindle-like” experience in the browser for these eTexts. (This is one of the highlights of my career. The work was hard, and the edge cases were many, but it works very well across devices, and has some pretty cool features.)

Filtering, serializing, and fetching annotation data

The teacher and classroom features introduced some additional challenges that were not present when the eText content was first hosted on enotes.com. First, classrooms had to be isolated from one another, meaning that if a teacher or student left an annotation in an eText for their classroom, it would not be visible to anyone outside the classroom. Also, a teacher needed the ability to duplicate annotations across classrooms if they taught multiple courses with the same eText. Eventually we introduced paid subscriptions for premium features, which made annotation visibility rules even more complicated. All Owl Eyes Official annotations are available for free, public viewing, but certain premium educator annotations are restricted to paid subscribers. (Also, students in a classroom taught by a teacher with a paid subscription are considered subscribers, but only within that classroom’s texts!) It was complicated.

We devised a strategy whereby a chain of composable rules could be applied to any set of annotations, to filter them by our business requirements. These rules implemented a simple, identical interface, and each could be passed as an argument to another to form aggregates. The filtered annotation data was then serialized as JSON and emitted onto the page server-side. When the reader renders in the client, this data is deserialized and the client-side application script takes over.
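To make the idea concrete, here is a minimal sketch of that composable-rule pattern. The names (`PublicOnlyRule`, `ClassroomRule`) are hypothetical stand-ins, not the actual classes in the owleyes.org codebase:

```javascript
// Every rule implements the same interface: filter(annotations) -> annotations.
class PublicOnlyRule {
  filter(annotations) {
    return annotations.filter(a => a.isPublic);
  }
}

class ClassroomRule {
  constructor(classroomId, inner) {
    this.classroomId = classroomId;
    this.inner = inner; // another rule, forming an aggregate
  }
  filter(annotations) {
    const scoped = annotations.filter(a => a.classroomId === this.classroomId);
    return this.inner ? this.inner.filter(scoped) : scoped;
  }
}

// Rules compose by passing one to another:
const rules = new ClassroomRule(42, new PublicOnlyRule());
const visible = rules.filter([
  { id: 1, classroomId: 42, isPublic: true },
  { id: 2, classroomId: 42, isPublic: false },
  { id: 3, classroomId: 7,  isPublic: true },
]);
// visible contains only annotation 1
```

Because every rule exposes the same `filter` method, adding a new business requirement means writing one new class and slotting it into the chain, without touching the others.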

The role that a given user possesses in the system often determines if they can see additional meta-data related to annotations, or whether they can perform certain actions on those annotations. These roles and permissions were communicated to the front-end to enable/disable features as needed, and then enforced on the back-end should a clever user attempt to subvert the limitations of his own role. To keep the data footprint as light as possible on the page, we developed a composable serialization scheme that could be applied to any entity in our application. The generic serialization classes break down an entity’s data into a JSON structure, while more specialized serialization classes add or remove data based on a user’s role and permissions. In this way a given annotation might contain meta-data of interest to teachers, but would exclude that meta-data for students. Additional information is added if the user is an administrator, to give them better control over the data on the front-end.
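The role-aware serialization scheme might look something like the following sketch. The production classes are server-side and these names are invented for illustration; only the shape of the idea is from the text:

```javascript
// Generic serializer produces the base JSON structure.
class AnnotationSerializer {
  serialize(annotation) {
    return { id: annotation.id, body: annotation.body, authorId: annotation.authorId };
  }
}

// Specialized serializers wrap a generic one and add or remove fields.
class TeacherAnnotationSerializer {
  constructor(inner) { this.inner = inner; }
  serialize(annotation) {
    // teachers see extra meta-data on top of the base structure
    return { ...this.inner.serialize(annotation), flaggedByStudents: annotation.flaggedByStudents };
  }
}

class StudentAnnotationSerializer {
  constructor(inner) { this.inner = inner; }
  serialize(annotation) {
    const data = this.inner.serialize(annotation);
    delete data.authorId; // students don't get author meta-data
    return data;
  }
}

const base = new AnnotationSerializer();
const forTeacher = new TeacherAnnotationSerializer(base).serialize({
  id: 1, body: 'To be...', authorId: 9, flaggedByStudents: 2,
});
// forTeacher includes flaggedByStudents; a StudentAnnotationSerializer
// wrapping the same base would omit authorId instead.
```

The composition mirrors the filter rules above: each serializer delegates to an inner one, so role-specific shaping stays out of the generic classes.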

The end result is that, from a user’s perspective, the annotations visible to them, and the data within those annotations, are tailor-made to the user when the eText reader is opened.

Fast-forward to the present day. I have recently been tasked with bringing our eTexts and annotations full circle, back to enotes.com. We brainstormed about the best way to make this happen, as enotes.com lacks the full eText and annotation data, as well as the rich front-end reading experience.

We decided that since the eText and annotation data was already being serialized as JSON for client-side consumption in owleyes.org, it would be trivial to make that same data available via an API. I implemented a simple controller that made use of Symfony’s authentication mechanisms for authenticating signed requests via API key pair, and returned annotation JSON data in the exact same manner that would be used for rendering that data in the eText reader. On inspection, I realized that some of the annotation data wasn’t relevant to what we wanted to display on enotes.com, so I quickly created new serialization classes that made use of existing serialization classes, but plucked unwanted data from their generated JSON structures before returning them. No changes were necessary to the annotation filtering rules, as an API user is, from the ruleset’s perspective, a “public user”, and so would see the same annotation data that users who aren’t logged in on the site would see.

Fetching this data on enotes.com was a simple matter of using PHP’s cURL functions to request data from the owleyes.org endpoint.

The user interface

The eText reader JavaScript code on owleyes.org is complex; it is composed of many different modules – view modules, state modules, utility modules, messaging modules, etc. – that interact together to form a smooth reading experience. It is far more interactive than the pages we wanted to display on enotes.com, so I initially worried that the code would not be entirely reusable because of its complexity.

I was pleasantly wrong.

When I write software I take great pains to decouple code, favor composition over inheritance, and observe clear, strict, and coarse API boundaries in my modules and classes. I, as every programmer does, have a particular “style” of programming – the way I think about and model problems – which, in this case, served me very well.

I copied modules from the owleyes.org codebase into the enotes.com codebase that I knew would be necessary for the new eText pages to function. With some minor adjustments (mostly related to DOM element identifiers and classes) the code worked almost flawlessly. Where I needed to introduce new code (we’re using a popup to display annotations in enotes.com, whereas in owleyes.org we use a footer “flyout” that cycles through annotations in a carousel) the APIs in existing code were so well defined that I was able to adapt to them with few issues. Where differing page behavior was desired (e.g., the annotation popup shifts below the annotation when it gets too close to the top of the screen as the reader scrolls, and above otherwise) the decoupled utility modules that track window and page state already provided me with the events and information I needed to painlessly implement those behaviors. And because the schema of the serialized annotation data delivered over the API was identical to the JSON data embedded in the owleyes.org reader, the modules that filtered, sorted, and otherwise manipulated that data did not change at all.

Why it worked

Needless to say, this project left me very satisfied as a developer. When your code is painlessly reused in other contexts it means you’ve done something right. I’ve made some observations about what made this reuse possible.

First, reusable code should model a problem, or a system, in such a way that the constituent components of that model can act together, or be used in isolation, without affecting the other parts of the model. Modules, classes, and functions are the tangible building blocks we use to express these models in software, and they should correspond with the way we think about these models in our heads. Each should be named appropriately, corresponding to some concept in the model, and the connections between them should be well understood and obvious. For example, in the eText reader, a tooltip is a highlighted portion of text that may be clicked on to display an annotation popup, which displays annotation information. The tooltip and annotation popup are components in the visual model; they are named appropriately, and the relationship between them is one-way, from tooltip to popup.
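That one-way tooltip-to-popup relationship could be expressed like this (the class names are illustrative, not the actual owleyes.org modules):

```javascript
// The popup knows how to display an annotation; it knows nothing of tooltips.
class AnnotationPopup {
  show(annotation) {
    this.current = annotation; // in the real app this would render the content
  }
}

// The tooltip holds its annotation and a reference to the popup.
// The relationship is one-way: tooltip -> popup.
class Tooltip {
  constructor(annotation, popup) {
    this.annotation = annotation;
    this.popup = popup;
  }
  onClick() {
    this.popup.show(this.annotation);
  }
}

const popup = new AnnotationPopup();
const tooltip = new Tooltip({ id: 1, body: 'Note on Act I' }, popup);
tooltip.onClick();
// popup.current is now the tooltip's annotation
```

Because the popup never references a tooltip, it can be reused with any trigger, which is exactly what made swapping the owleyes.org flyout for an enotes.com popup feasible.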

Second, a given problem may in fact be composed of multiple models that are being run at the same time. Modules that control the UI are part of the visual or display model; modules that control the access to, and filtering of, data are part of the domain model. Modules that track mouse movements, or that enable/disable features based on user interaction, are part of the interaction model. Within these models, objects or modules should only perform work that makes sense within the purpose of the model. Objects in the visual model should not apply business rules to data, for example. When one or more objects exhibit behaviors from multiple models, extracting and encapsulating the behavior that is not part of each object’s primary model makes that object more reusable.

Third, objects within a model should have well-defined, coarse APIs. (In the context of objects, an API is an object’s “public” methods to outside callers, or to the objects that extend it.) A coarse API is one that provides the least amount of functionality that its responsibilities require. Yes, the least. An object either stands alone, or makes use of other objects to do its work. If the methods on an object are numerous the object can likely be broken down into several smaller objects to which it will delegate and on which it will depend to do its work internally. Ask: what abstraction does this object represent, and which methods fulfill that abstraction? Likewise the parameters to an object’s methods can often be reduced by passing known state to the object’s constructor (or factory function, or whatever means are used to create the object). This chains the behavior of the object to a predetermined state – all remaining method arguments are only augmentations to this state. If the state needs to change, another object of the same type, with different state, is created and used in its stead. The API is coarse because the methods are few, and their parameters are sparse.
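A small sketch of the “state in the constructor” idea, using a hypothetical `AnnotationFetcher` (the name and URL are invented for illustration): instead of a method that takes every piece of context on every call, fixed state is bound once and the method shrinks to a single parameter.

```javascript
class AnnotationFetcher {
  constructor(baseUrl, apiKey) {
    // predetermined state, set once at construction
    this.baseUrl = baseUrl;
    this.apiKey = apiKey;
  }
  // coarse API: one method, one sparse parameter;
  // everything else is augmentation of the constructor's state
  urlFor(textId) {
    return `${this.baseUrl}/texts/${textId}/annotations?key=${this.apiKey}`;
  }
}

const fetcher = new AnnotationFetcher('https://api.example.org', 'abc123');
fetcher.urlFor(42);
// → 'https://api.example.org/texts/42/annotations?key=abc123'

// If the state must change, create a new object rather than mutating this one:
const other = new AnnotationFetcher('https://api.example.org', 'xyz789');
```

Callers never juggle base URLs or keys; they ask one question of one object whose state was fixed up front.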

Fourth, an object’s state should be stable at all times. Its initial state should be set, completely, through the object’s source of construction (whether by data provided via parameters, or sensible defaults, or both). Properties on objects should be considered read-only, as they represent a “window” into the object’s state. Computed properties should be calculated whenever an object’s relevant internal state changes, usually the result of a method invocation. I avoid exposing objects that can be manipulated by reference through properties; properties are always primitives that can be re-produced or re-calculated, or collections of other “data” objects that have the same characteristics (usually cloned or reduced from some other source). If an object needs to expose information from one of its internal children, I copy that information from the internal source to a primitive property on the external object itself. If the information is itself in the form of an object with multiple properties, I flatten those into individual properties on the external object. The end result is that an object’s state is always generated internally, as a consequence of method invocations, and cannot be manipulated externally, except by way of its public API (methods).
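The flattening idea might look like this (a hypothetical `Annotation` wrapper, not the production class): information from an internal child object is copied onto primitive properties, and those properties are re-derived only when a method changes state.

```javascript
class Annotation {
  constructor(data) {
    this._data = { ...data };   // internal, shallow copy of the source data
    this._recompute();
  }
  _recompute() {
    // flattened, primitive properties re-derived after each state change
    this.id = this._data.id;
    this.authorName = this._data.author.name;   // flattened from a child object
    this.replyCount = this._data.replies.length; // computed property
  }
  addReply(reply) {
    this._data.replies.push(reply); // state changes only via methods...
    this._recompute();              // ...and computed properties follow
  }
}

const a = new Annotation({
  id: 1,
  author: { name: 'jared', role: 'teacher' },
  replies: [],
});
a.addReply({ body: 'Agreed!' });
// a.replyCount is 1 and a.authorName is 'jared'; callers never receive
// a reference to the internal author or replies objects
```

Note that nothing outside the object can push `replyCount` out of sync with the replies list, because both live behind the same method boundary.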

Finally, shared data should exist in “bags” – objects that jealously guard data and only deliver data by value to callers when asked. For example, on owleyes.org a given chapter in Hamlet may contain hundreds of annotations. Annotations may be created, edited, deleted, and receive replies in client code. The annotation bag is responsible for holding the annotation data and delivering it, in read-only format, to other modules as requested so that they can render themselves (or perform computations) accordingly. When an annotation changes – when an owleyes.org PUT request is sent to the API and a successful response is received – a method on the bag is invoked to update the annotation. Because annotations are only fetched by value, it does no good for the module that initiated the update to directly manipulate the properties on its own annotation object. No other module will receive the change. Instead, the responsible module tells the bag to update the annotation by passing it the new annotation deserialized from the API response. The bag replaces the annotation in its internal collection and then raises an event to notify listening modules that the given annotation has changed. Any module interested in that annotation – or all annotations – then requests the updated data (in read-only format) and re-renders itself (or re-computes its internal state). The bag, then, is the shared resource among modules (not the data, directly) and it is the source of truth for all data requests.
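A minimal bag along these lines might be sketched as follows; the names and the event mechanics are illustrative, assuming a simple callback-based notification rather than whatever messaging modules owleyes.org actually uses:

```javascript
class AnnotationBag {
  constructor(annotations) {
    this._byId = new Map(annotations.map(a => [a.id, { ...a }]));
    this._listeners = [];
  }
  onChange(fn) { this._listeners.push(fn); }
  get(id) {
    const a = this._byId.get(id);
    return a ? { ...a } : null; // deliver by value, never by reference
  }
  update(annotation) {
    // replace the internal copy, then raise a change event
    this._byId.set(annotation.id, { ...annotation });
    this._listeners.forEach(fn => fn(annotation.id));
  }
}

const bag = new AnnotationBag([{ id: 1, body: 'Original' }]);
let changedId = null;
bag.onChange(id => { changedId = id; }); // a listening module re-renders here

const copy = bag.get(1);
copy.body = 'Mutated copy';     // does NOT affect the bag's data...
bag.update({ id: 1, body: 'Updated via the bag' }); // ...this does
// changedId is now 1, and bag.get(1).body reflects the update
```

Mutating a fetched copy is a no-op as far as the rest of the system is concerned; only `update` changes shared state, and every interested module hears about it.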

Epilogue

There is more I could say on the patterns and principles that arose during the execution of this project, but those enumerated above were of the most import and consequence while porting existing code into its new context. Reusable code is not easy to write. It is not automatic. It is the result of thought and discipline that slowly become habit as exercised.

Not all code will be reused; most won’t, in fact. But writing code with a view of extension and reuse in mind can pay off in time and effort in the long run. This is a trade-off, though. The more reusable code tends to be, the more layers of indirection it will possess, necessitating an increase in the number of modules, classes, functions, etc. that need be created. The cost can be mitigated by keeping code as simple as possible. Code can be navigated with relative ease if one can reason about it, divining what modules (etc.) do and how they are related through inference.

While I can’t guarantee your experience will be as pleasant as mine, I do believe that if you think about and put these patterns and principles into action you will one day experience the joy of truthfully telling your manager, “oh, that will only take two weeks!” because your diligence produced well-crafted, reusable code.