Protip - return from exceptional conditions early

During a recent code interview, I noticed a React component with a render method written in the following (abbreviated) form,

render() {
  return this.state.items.length > 0 ? (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  ) : (
    ''
  );
}

where ComponentWithLotsOfProps had at least a dozen props, some of which were not simple primitive values.

While there is nothing technically wrong with this render method, it could be better. It suffers from a few deficiencies.

First, ternaries are objectively difficult to read when they are not short. It is difficult to grok what the method actually produces because the whole ternary is returned, requiring the reader to do double work to find the “implicit” returns (there are two) rather than looking for the easily identifiable return keyword.

Second, one must read the entire method to know what gets returned if there are no items in state. Is it a component? Is it null? Is it an empty string? That is unknown until the whole method has been read.

Third, if additional conditions are required in future work to determine what will be rendered, they cannot easily be introduced in this method.

A better alternative is to omit the ternary, and explicitly return the exceptional condition values first.

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

Due to reduced nesting, this is far easier to read, and return values are also easily identifiable. If additional conditions must be evaluated in the future, modifying this method becomes much simpler:

render() {
  if (this.state.items.length === 0) {
    return '';
  }

  if (this.state.items.length === 1) {
    return (<SingleItemComponent item={this.state.items[0]} />);
  }

  return (
    <ComponentWithLotsOfProps
      prop1={}
      prop2={}
      propN={}
      ...
    />
  );
}

As with most things in programming: the simpler and more explicit, the better.

Modern JavaScript tooling is too complicated? Hacker News

This post on Hacker News is worth the read, not only for the OP’s posted content, but because of the follow-up comments (routinely the better part of a Hacker News submission).

You know it’s time for popcorn when the thread starts like this:

…if only the tooling was too complicated, it would not be too bad. IMAO the entire front-end JS world is one big pile of MISERY, complicated is not the word or the problem at all.

Frederick P. Brooks Quotes, Part 1

A couple of years ago I started The Mythical Man-Month by Frederick P. Brooks and, I am ashamed to say, got sidetracked about halfway through; however, I have recently resumed my reading. Fortunately the chapters stand alone quite well, so the continuity loss is minor. I intend to share, over the course of a few posts, quotes that I find important in the text. Though written across the 70s - 90s, Mythical holds a tremendous amount of wisdom developed within a particular historical context. That wisdom is still relevant today, though the specific technical challenges that gave rise to it have morphed – almost beyond recognition – over time.

Preface

Briefly, I believe that large programming projects suffer management problems different in kind from small ones, due to division of labor. I believe the critical need to be the preservation of the conceptual integrity of the product itself.

Chapter 1: The Tar Pit

The obsolescence of an implementation must be measured against other existing implementations, not against unrealized concepts.

Chapter 2: The Mythical Man-Month

…our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Cost does indeed vary as the product of the number of men and the number of months. Progress does not.

Men and months are interchangeable commodities only when a task can be partitioned among many workers with no communication among them.

When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on the schedule. [emphasis mine]

Since software construction is inherently a systems effort—an exercise in complex interrelationships—communication effort is great, and it quickly dominates the decrease in individual task time brought about by partitioning. Adding more men then lengthens, not shortens, the schedule.

…the urgency of the patron may govern the scheduled completion of the task, but it cannot govern the actual completion.

…false scheduling to match the patron’s desired date is much more common in our discipline than elsewhere in engineering.

Adding manpower to a late software project makes it later. [emphasis mine]

The number of months of a project depends upon its sequential constraints. The maximum number of men depends upon the number of independent subtasks. From these two quantities one can derive schedules using fewer men and more months. (The only risk is product obsolescence.) One cannot, however, get workable schedules using more men and fewer months.

Chapter 3: The Surgical Team

…the sheer number of minds to be coordinated affects the cost of the effort, for a major part of the cost is communication and correcting the ill effects of miscommunication (system debugging). This, too, suggests that one wants the system to be built by as few minds as possible.

This then is the problem with the small, sharp team concept: it is too slow for really big systems.

For efficiency and conceptual integrity, one prefers a few good minds doing design and construction. Yet for large systems one wants a way to bring considerable manpower to bear, so that the product can make a timely appearance.

In the surgical team, there are no differences of interest, and differences of judgment are settled by the surgeon unilaterally. These two differences—lack of division of the problem and the superior-subordinate relationship—make it possible for the surgical team to act uno animo.

Chapter 4: Aristocracy, Democracy, and System Design

I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas. [emphasis mine]

For a given level of function, however, that system is best in which one can specify things with the most simplicity and straightforwardness.

Conceptual integrity in turn dictates that the design must proceed from one mind, or from a very small number of agreeing resonant minds. [emphasis mine]

…the lack of conceptual integrity made the system far more costly to build and change, and I would estimate that it added a year to debugging time.

The opportunity to be creative and inventive in implementation is not significantly diminished by working within a given external specification, and the order of creativity may even be enhanced by that discipline. The total product will surely be.

…refrain from hiring implementers until the specifications are complete. This is what is done when a building is constructed.

…the integral system goes together faster and takes less time to test… a widespread horizontal division of labor has been sharply reduced by a vertical division of labor, and the result is radically simplified communications and improved conceptual integrity.

Chapter 5: The Second-System Effect

How does the architect avoid the second-system effect [i.e., the tendency to over-compensate for things that were sidelined in the first system, or prototype, by introducing massive bloat in the second iteration of a piece of software]? Well, obviously he can’t skip his second system. But he can be conscious of the peculiar hazards of that system, and exert extra self-discipline to avoid functional ornamentation and to avoid extrapolation of functions that are obviated by changes in assumptions and purposes.

Chapter 6: Passing the Word

The architect must always be prepared to show an implementation for any feature he describes, but he must not attempt to dictate the implementation.

Chapter 7: Why did the Tower of Babel Fail?

The purpose of organization is to reduce the amount of communication and coordination necessary… [emphasis mine]

The job done least well by project managers is to utilize the technical genius who is not strong on management talent.

Chapter 8: Calling the Shot

…the estimating error [of delivery time] could be entirely accounted for by the fact that his teams were only realizing 50 percent of the working week as actual programming and debugging time. Machine downtime, higher-priority short unrelated jobs, meetings, paperwork, company business, sickness, personal time, etc. accounted for the rest. In short, the estimates made an unrealistic assumption about the number of technical work hours per man-year. [emphasis mine]

Chapter 10: The Documentary Hypothesis

Conway’s Law predicts: “Organizations which design systems are constrained to produce systems which are copies of the communication structures of these organizations.” Conway goes on to point out that the organization chart will initially reflect the first system design, which is almost surely not the right one. If the system design is to be free to change, the organization must be prepared to change. [emphasis mine]

…writing the decisions down is essential. Only when one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones.

Chapter 11: Plan to Throw One Away

…the programmer delivers satisfaction of a user need rather than any tangible product… [emphasis mine]

Program maintenance involves no cleaning, lubrication, or repair of deterioration. It consists chiefly of changes that repair design defects. [emphasis mine]

[The total cost of maintaining a widely used program] is strongly affected by the number of users. More users find more bugs. [emphasis mine]

All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes.

Chapter 12: Sharp Tools

I have postulated one toolmaker per team. This man masters all the common tools and is able to instruct his client-boss in their use. He also builds the specialized tools his boss needs.

Chapter 13: The Whole and the Parts

The most pernicious and subtle bugs are system bugs arising from mismatched assumptions made by the authors of various components.

[Niklaus Wirth’s] procedure is to identify design as a sequence of refinement steps. One sketches a rough task definition and a rough solution method that achieves the principal result. Then one examines the definition more closely to see how the result differs from what is wanted, and one takes the large steps of the solution and breaks them down into smaller steps. Each refinement in the definition of the task becomes refinement in the algorithm for solution… from this process one identifies modules of solution or of data whose further refinement can proceed independently of other work… [use] as high-level a notation as is possible at each step, exposing the concepts and concealing the details until further refinement becomes necessary.1

Many poor systems come from an attempt to salvage a bad basic design and patch it with all kinds of cosmetic relief.

To be continued…


  1. I call this process “rough draft programming” when applied to implementation. When applied to planning, this is essentially the heart of the agile feedback loop.

Destructuring Reconsidered

While working with React for the last five months, I’ve noticed that React developers make extensive use of object destructuring, especially in function signatures. The more I use React, the less I like this trend, and here are a few short reasons why.

There are countless books by wise industry sages1 that discuss how to write good functions. Functions should do one thing, and one thing only; they should be named concisely; their parameters should be closely related; etc. My observation is that destructured function parameters tend to quickly lead to violations of these best practices.

First, destructuring function parameters encourages “grab bag” functions where the destructured parameters are unrelated to each other. From a practical point of view, it is the destructured properties of the actual parameters that are considered, mentally, as parameters to a function. At least, the signature of a destructured function reads as if they are:

function foo({ bar, baz }, buzz) {}

A developer will read this as if bar, baz, and buzz are the actual parameters of the function (you could re-write the function this way, so they might as well be), but this is incorrect; the real parameters are buzz and some other object, which, according to best practice, should be related to buzz. But because the first parameter (param1) is destructured, we now have properties bar and baz which are one step removed from buzz, and therefore the relationship between param1 and buzz is obscured.

This can go one of three ways:

  1. if param1 and buzz are related, we do not know why;
  2. if param1 and buzz are not related (but bar and baz are related to buzz) then the function is poorly written;
  3. if bar, baz, param1, and buzz are all closely related, then the function is still poorly written, as it now has three “virtual parameters” instead of just two actual parameters.

Second, destructured functions encourage an excessive number of “virtual parameters”. For some reason developers think this function signature is well written:

function sendMail({ firstName, lastName, email }, { address1, city, state, zip }, { sendSnailMail }) {}
// function sendMail(user, address, mailPreferences) {}

“But it only has three parameters!”, they say. While technically true, the point of short function signatures is to scope the function to a single, tangible task and to reduce cognitive overhead. For all practical purposes this function has eight parameters. And while the purpose of this function is fairly obvious based on its name, less expressive functions are far more difficult to grok.

Third, destructuring makes refactoring difficult. Sure, our tools will catch up some day. But from what I’ve seen, modern editors and IDEs cannot intelligently refactor a function signature with destructured parameters, especially in a dynamically/weakly typed language like JavaScript. The IDE or editor would need to infer the parameters passed into the function by examining invocations elsewhere in code, and then infer the assignments to those parameters to determine which constructor function or object literal produced them, then rewrite the properties within those objects… and you can see how this is a near-impossible feat. Or at the very least, how even the best IDEs and editors would introduce so many bugs in the process that the feature would be avoided anyway.
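
To make that inference chain concrete, here is a hypothetical sketch (the names are invented for illustration) of what a tool would have to trace just to rename the destructured property bar:

// 1. the destructured signature a tool is asked to refactor
function process({ bar }) {}

// 2. it must find every invocation...
process(makeWidget());

// 3. ...infer which function built the object being passed...
function makeWidget() {
  return { bar: 1 }; // 4. ...and rewrite the property here as well
}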

Fourth, developers must often trace the invocation of a function to its definition. In my experience, code bases typically have many functions with the same name used in different contexts. Modern tools are smart, and examine function signatures to try to link definitions to invocations, but destructuring makes this process far more difficult. Given the following function definition, the invocations would all be valid (since JS functions are variadic), but if a code base had more than one function named foo, determining which invocation is linked to which definition is something of a special nightmare.

// in the main module
function foo({ bar, baz }, { bin }, { buzz }) {}

// in the bakery module
function foo(bar, { baz }) {}

// invocations
foo({ bar, baz });

foo(anObject, anotherObject);

foo(1, { bin }, null);

In contrast, functions with explicitly named parameters (usually the signature parameters are named the same as the variables and properties used to invoke the function) are an order of magnitude easier to trace.

Fifth, destructured parameters obscure the interfaces of the objects to which they belong, leaving the developer clueless as to the related properties and methods on the actual parameter that might have use within the function. For example:

function handle({ code }) {}

What else, besides code may exist in the first parameter that will allow me to more adequately “handle” whatever it is that I’m handling? The implicit assumption here is that code will be all I ever need to do my job, but any developer will smirk knowingly at the naivety of that assumption. To get the information I need about this parameter I have to scour the documentation (hahahahaha documentation) in hopes that it reveals the actual parameter being passed (and doesn’t just document the destructured property), or manually log the parameter to figure out what other members it possesses. Which brings me to my last point:

Logging. I cannot count the number of times I have had to de-destructure a function parameter in order to log the complete object being passed to the function, because I needed to know some contextual information about that object. The same applies for debugging with breakpoints. (I love when Webpack has to rebuild my client code because I just wanted to see what actual parameter was passed to a function. Good times.)
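
To illustrate, the temporary rewrite usually looks something like this (using the hypothetical handle function from above):

// temporarily widened from: function handle({ code }) {}
function handle(event) {
  console.log(event); // now the entire argument is visible, not just `code`
  const { code } = event;
  // ...
}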

Don’t get me wrong – I’m not completely against destructuring. I actually like it quite a bit when used in a way that does not obscure code, hinder development, or hamstring debugging. Personally I avoid destructuring function parameters in the signature, and instead destructure them on the first line of the function, if I want to alias properties with shorter variable names within the function.

function sendEmail(user, address, mailPreferences) {
  const { firstName, lastName, email } = user;
  const { address1, city, state, zip } = address;
  const { sendSnailMail } = mailPreferences;
  // ...
}

This pattern conforms to best practices for defining functions, and gives me a lightweight way to extract the bits of information I need from broader parameters, without making it painful to get additional information from those parameters if I need it.

Don’t use the new shiny just because it’s what all the cool kids do. Remember the wisdom that came before, because it came at a cost that we don’t want to pay again.


  1. Clean Code, Code Complete, etc.

Book Review - How to Watch TV News by Neil Postman

My grandparents were religious about watching the evening news. On the occasions that we visited them I recall the family gathering in the living room after dinner to watch the local half-hour news show, followed by the weather. All activity in the house ceased for those precious minutes, and all eyes were glued to the cathode ray tube’s mesmerizing colors.

Now, as an adult, I take for granted the never-ending stream of news available to me 24 hours a day, seven days a week, every day of the year – even holidays. Where my grandparents had a brief window to the wider world, I live outside the house entirely, to the point where I do not find it odd to worry about affairs in foreign countries that will never, ever affect my daily life. But for some reason I know about them, because they are news, and news – we are told – is important.

In 1992, cultural critic Neil Postman and journalist Steve Powers published a book called How to Watch TV News. I can’t say what reception the book received, but it was at least significant enough for an updated edition to appear in 2008. That a mere sixteen year gap warranted an updated edition is a testament to the speed with which technology has transformed the delivery and consumption of news itself. The first publication, however, deals with news as Postman and Powers – and myself, as a child – experienced it in the 80s and early 90s, largely delivered to homes in regular time slots from well-known networks.

“What is news?” Postman asks. (I shall refer to both Postman and Powers as “Postman” in the remainder of this review, only because I am more familiar with his work.) Most people would answer that news is the most important events that occur during the day. But many important events occur during the day, and the news only occupies a limited time slot on network television in a 24 hour period. (Even with modern technology, more significant events occur daily than could possibly be covered in 24 hours, even if the manpower and network bandwidth were available.) News, then, is a curated selection of events on which someone reports. And the person reporting is likely not the person who decides what events are covered, though they are the person responsible for interpreting events, and relaying that interpretation to viewers.

What criteria are used to determine which daily events are most newsworthy? To answer this question, Postman looks at how news networks make money. Advertisers spend dollars to place commercials on networks that attract eyes, so to be profitable – that is, to court advertisers – news programs need eyes. News stories, according to Postman, are selected based on which stories will get the most eyes to remain on the screen for the duration of the program. Compared to regular entertainment programs, news programs are relatively inexpensive to produce, but tend to attract significant viewership, so their profit margins are higher. Since audiences who watch news programs tend to be more educated and attentive, and to have more money, they are more susceptible to clever advertising.

News networks employ a number of interesting strategies to keep audiences hooked during programs. Popular entertainment shows are often lead-in programs that already draw a significant audience, and which are likely to leave eyes lingering on the couch after they conclude. “Teasers” for upcoming news are peppered within these programs – “Stay tuned for a story of murder and mayhem, coming up at 5!” – the visual equivalent of “clickbait” designed to entice audiences to stay. Anchors and their supporting staff that cover weather, sports, etc. are all chosen with a view to their aesthetic in mind. Better looking people are paid more to fill a part – to be an actor – in the drama that is TV news. Together this ensemble forms a “family” metaphor: two co-anchors (of the opposite sex) that are “husband and wife”, and their subordinates who play the children. Viewers are brought into their happy home as guests. Everyone in the family has a role to play, and more importantly, everyone is happy to play that role. It surpasses Ward and June Cleaver’s family as the ideal.

These actors are paid large sums to deliver news to audiences, but reporters write and frame the stories anchors deliver according to their own mental points of origin. When covering events, reporters employ three types of language: description (what actually happened), judgement (how they feel, morally, about what happened), and inference (drawing conclusions about related ideas based on judgement). The very language with which reporters communicate news has connotative meaning that goes beyond the visuals that “show” the story to a viewer. In fact, pictures only speak to the concretes, the particulars, of reality. They do not deal in abstractions at all. Language is the means by which humans unify the infinite variety of particulars in the universe, enabling us to deal with it in meaningful (and sane) ways. So while viewers might think that the images or video being displayed on a nightly news program are “news”, they are in fact not – “news” is the reporter’s connotations about those images, laced, as they are, with a context that is often not conveyed. Remember, Postman tells us, that different people experience events in different (often contradictory) ways. Eyewitness testimony is of dubious reliability at best; and what is news, except the eyewitness testimony of a single reporter?

A news program’s time limitations place temporal restrictions on the quantity of information that can be squeezed into the program.

“Time works against understanding, coherence, even meaning.”

The more instantaneously information is delivered, the less historical context and analysis can be delivered with it. Shorter news segments mean that context is necessarily dropped in favor of more visually scintillating content. Reporters or anchors may make contextual comments, but they are usually passed off as errata to an otherwise complete visual work. The increasing number of retractions, updates, and corrections in more modern news stories proves Postman’s point. To competently watch the news, then, a viewer must already be armed with context – from books, articles, and other sources of information. The viewer must not be a passive vessel to be filled with news, but must be an active participant and critic of the news.

Postman’s final chapter contains eight recommendations for people who watch TV news. These recommendations stand the test of time, and they may be applied to news one receives from any visual medium, including the Internet (YouTube and Facebook, especially).

1 - “In encountering a news show, you must come with a firm idea of what is important.” A viewer must understand that news is delivered to the public based on the financial interests of the network. To paraphrase Postman, reporters are not as powerful as accountants. Viewers will only be as competent in their consumption of news as they have been diligent in the development of their own knowledge.

2 - “In preparing to watch a TV news show, keep in mind that it is called a ‘show’.” Teasers, soundtracks, fancy visuals, photogenic anchors – these are the things of entertainment, and they are calculated to affect viewers emotionally. TV news is drama larping as education.

3 - “Never underestimate the power of commercials.” Commercials, Postman writes, are “a new, albeit degraded means of religious expression in that most of them take the form of parables, teaching people what the good life consists of… that, in fact, is one of the reasons commercials are so effective. People do not usually analyze them. Neither, we might say, do people analyze biblical parables, which are often ambiguous…”

4 - “Learn something about the economic and political interests of those who run TV stations.” Since all news is chosen and delivered through a filter of values, to judge it competently a viewer must know something about those from whom it is delivered. In the 80s and 90s this may have been a more difficult task; in the twenty-first century it seems that all reporters wear political affiliations on their sleeves.

5 - “Pay special attention to the language of newscasts.” Language frames reality, and also betrays the biases and assumptions of the people using it. Since the purview of television news is to arrest the viewer for ratings (and hence, lure advertisers), it can be assumed that the language chosen to convey news will be calculated to provoke maximum emotional response, whether warranted or not. Perhaps this is why, Postman writes, “people who are heavy television viewers, including viewers of television news shows, believe their communities are much more dangerous than do light television viewers. Television news, in other words, tends to frighten people.” The more hysteria that can be packed into every sentence a reporter writes, the better. People love watching train wrecks.

6 - “Reduce by at least one-third the amount of TV news you watch.” The reasons, by now, should be obvious. Spend your freed time reading.

“…each day’s TV news consists, for the most part, of fifteen or so examples of one or the other of the Seven Deadly Sins… It cannot possibly do you any harm to excuse yourself each week from acquaintance with thirty or forty of these examples… TV news does not reflect normal, everyday life.”

7 - “Reduce by one-third the number of opinions you feel obliged to have.” One interesting side-effect of TV news is that it compels people to feel like they ought to parrot what has been reported, and that they are morally or intellectually inferior if they reserve judgement or admit to ignorance on a reported subject. But this is nonsense; insanity, even. No well-informed insights can come from sound bites and contextless reporting.

8 - “Do whatever you can to get schools interested in teaching children how to watch TV news shows.” Perhaps “critical viewing” could be taught alongside critical thinking in school classrooms. Students are certainly exposed to far more news than is appropriate for their happiness and well-being. We should consider it morally obligatory to equip them to deal with the deluge sooner rather than later.

Though dated, How to Watch TV News has a tremendous amount of insight for the consumption of any sort of visual media. The Internet has, by and large, taken the place of television in the twenty-first century, and the media establishment – to which the term “fake news” sticks like spaghetti to a wall – loses its collective shit daily. Information comes to us at a tremendously unhealthy rate, overwhelming the senses and clouding the mind, yet our intellectual and moral standings, our very identities, in fact, are judged according to which news source gains our allegiances. Perhaps news is not as important as we think it is. Perhaps it is more important to step back and ask what we should know, and why it’s important, before becoming a passive receptacle for someone else’s answers to those questions. Postman thinks so, and I agree.

Book Review - On the Meaning of Life by Will Durant

Would you know what to say to a total stranger who asked you to convince him not to commit suicide?

In 1930, that is the very situation that prompted historian Will Durant to ponder and write about the most profound question of all: what is the meaning of life? After ad libbing his own answer to a desperate soul whom he never saw again, he penned a letter to the foremost minds of his time, inquiring: “…what are the sources of your inspiration and your energy, what is the goal or motive-force of your toil, where you find your consolations and your happiness, where, in the last resort, your treasure lies?” Some responded, and in 1931 Durant compiled their letters in a short book, On the Meaning of Life.

Among his respondents were Mohandas Gandhi, H. L. Mencken, Sinclair Lewis, Dr. Charles Mayo, George Bernard Shaw, Bertrand Russell, and others. Many replies contained contributory responses; some were terse and dismissive, but Durant reported each in good spirits and with the dignity to laugh at those who considered it beneath their time to be thorough with him.

Durant spends the first six chapters discussing why modern man is increasingly inclined to hopelessness and despair, leading to an annual increase in suicides. The old ways, the old sources of meaning – religion and tradition – had been relegated to myth and legend by scientists and historians. All the while man’s view of himself became more mechanistic, more deterministic, and the gains in knowledge, though dispelling false beliefs of the past, offered up no unifying system of hope and significance for newly untethered minds. The world seemed hopeless, Durant concluded, but there were many – he among them – who believed that lost hope is not necessarily a hopeless loss.

The replies are grouped into chapters based on the overall characteristics that categorize the respondents:

  • the men of letters
  • entertainers, artists, scientists, and educators
  • the religionists
  • the women1
  • a prison convict serving a life sentence
  • the skeptics

Without spoiling the joy of reading each reply for yourself, I want to call your attention to several ideas that I think form the meat of the most articulate replies.

Some respondents found purpose in their work, but not just because they felt productive. They felt they were uniquely suited, by their own personalities and dispositions, to perform the tasks that ultimately fulfilled them. Meaning, for them, came from the knowledge that their best parts were being utilized in the best possible ways.

Another respondent pointed out that, regardless of how much we claim to know now, we hardly know everything. To conclude definitively that life is meaningless based on so little information is premature at best.

In the perspective of another, the desire for immortality is tied to our desire for meaning. We want to be part of something lasting. If immortality is real, and there is a life after this one, we will have the opportunity to experience this. But if not, even though we won’t live forever, we will never be conscious of not living. In our own minds, we will be, then we will be not; in either case, we should live as if immortal because practically, we are.

Finally, the longest and most touching reply came from a convict serving a life sentence in Sing Sing prison. I take the liberty of quoting a bit from it here:

“Truth is not beautiful, neither is it ugly. Why should it be either? Truth is truth, just as figures are figures. When a man wishes to learn the exact condition of his business affairs, he employs figures and, if these figures reveal a sad state of his affairs, he doesn’t condemn them and say that they are unlovely and accuse them of having disillusioned him. Why, then, condemn truth, when it only serves him in this enterprise of life as figures serve him in his commercial enterprises? That idol-worshipping strain in our natures has visioned a figure of Truth draped in royal raiment and, when truth in its humble form, sans drapery, appears to us, we cry, ‘Disillusionment.’

Custom and tradition have caused us to confuse truth with our beliefs. Custom, tradition and our mode of living have led us to believe we cannot be happy, save under certain physical conditions possessed of certain material comforts. This is not truth, it is belief. Truth tells us that happiness is a state of mental contentment. Contentment can be found on a desert island, in a little town, or the tenements of a large city. It can be found in the palaces of the rich or the hovels of the poor.

Confinement in prison doesn’t cause unhappiness, else all those who are free would be happy. Poverty doesn’t cause it, else the rich all would be happy. Those who live and die in one small town are often as happy, or happier than many who spend their entire lives in travel… Happiness is neither racial, nor financial, nor social, neither is it geographical…

Reason tells us that it is a form of mental contentment and – if this be true – its logical abode must be within the mind.”

The final chapter in his book contains Durant’s answers to his own questions, formulated in the same year after receiving “several letters [from others] announcing suicide”. His reply is titled “Letters to a Suicide” and is a beautiful call to find meaning within the very improbability of life itself; that we have it, and that it offers us actual joy and happiness is meaningful.

“Nature will destroy me, but she has a right to – she made me, and burned my senses with a thousand delights; she gave me all that she will take away. How shall I ever thank her sufficiently for these five senses of mine – these fingers and lips, these eyes and ears, this restless tongue and this gigantic nose?”

Overall I give the book 4/5 stars. Durant’s prose is, as ever, mind candy. The variety of responses in content, length, and depth – and their sources and historical context – give the reader much to think about and, surprisingly, don’t attempt to over-simplify or trivialize Durant’s questions. My only (minor) complaint regards the book’s length. It seems too short for such a complex subject, and I would have enjoyed, very much, additional material collected over a longer period of time. I cannot fault Durant though. Faced with the despair of suicidal strangers, I believe he pushed to collect the best answers in the most condensed form possible. The result is rich, and worth reading.


  1. Recall that the year was 1931, and the role of woman was undergoing metamorphosis. That Durant devoted a chapter to women he greatly respected is notable. Durant was very eager to see women contribute to the “great conversation” of history. His wife Ariel, a co-author on many of Durant’s own works, shared this passion.

Creating Reusable Code

Creating reusable software is challenging, especially when that software may be reused in situations or scenarios for which it may not necessarily have been designed. We’ve all had that meeting where a boss or manager asked the question: “What you’ve designed is great, but can we also use it here?”

In the last month I’ve had this exact experience, from which I’ve learned a number of valuable lessons about crafting reusable software.

eTexts and annotations

When I first started working for eNotes, my initial task was to fix some code related to electronic texts that we displayed on our site (e.g., Shakespeare, Poe, Twain, etc.). We have a significant collection of annotations for many texts, and those annotations were displayed to users when highlights in the text were clicked. A couple of years ago we spun this technology off into a separate product, Owl Eyes, with additional teacher tools and classroom management features. Because of my experience with the existing eText and annotation code, and because I am primarily responsible for front-end JavaScript, I was tasked with building a “Kindle-like” experience in the browser for these eTexts. (This is one of the highlights of my career. The work was hard, and the edge cases were many, but it works very well across devices, and has some pretty cool features.)

Filtering, serializing, and fetching annotation data

The teacher and classroom features introduced some additional challenges that were not present when the eText content was first hosted on enotes.com. First, classrooms had to be isolated from one another, meaning that if a teacher or student left an annotation in an eText for their classroom, it would not be visible to anyone outside the classroom. Also, a teacher needed the ability to duplicate annotations across classrooms if they taught multiple courses with the same eText. Eventually we introduced paid subscriptions for premium features, which made annotation visibility rules even more complicated. All Owl Eyes Official annotations are available for free, public viewing, but certain premium educator annotations are restricted to paid subscribers. (Also, students in a classroom taught by a teacher with a paid subscription are considered subscribers, but only within that classroom’s texts!) It was complicated.

We devised a strategy whereby a chain of composable rules could be applied to any set of annotations, to filter them by our business requirements. These rules implemented a simple, identical interface, and each could be passed as an argument to another to form aggregates. The filtered annotation data was then serialized as JSON and emitted onto the page server-side. When the reader renders in the client this data is deserialized and the client-side application script takes over.
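
The real rules are server-side classes, but a minimal JavaScript sketch of the composition idea might look like this (the rule names, the data shapes, and the filter interface are all illustrative, not our actual code):

const currentUser = { classroomId: 42, isSubscriber: false };
const allAnnotations = [
  { id: 1, classroomId: 42 }, // visible: same classroom
  { id: 2, classroomId: 7 },  // hidden: different classroom
  { id: 3, premium: true },   // hidden: premium, user is not a subscriber
];

// every rule implements the same interface: filter(annotations, user)
const baseRule = {
  filter: (annotations) => annotations,
};

// each rule accepts another rule as an argument to form an aggregate
const classroomRule = (inner) => ({
  filter: (annotations, user) =>
    inner
      .filter(annotations, user)
      .filter((a) => !a.classroomId || a.classroomId === user.classroomId),
});

const subscriberRule = (inner) => ({
  filter: (annotations, user) =>
    inner
      .filter(annotations, user)
      .filter((a) => !a.premium || user.isSubscriber),
});

// chain the business requirements and apply them to any set of annotations
const visibilityRules = subscriberRule(classroomRule(baseRule));
const visible = visibilityRules.filter(allAnnotations, currentUser);
// => [{ id: 1, classroomId: 42 }]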

The role that a given user possesses in the system often determines if they can see additional meta-data related to annotations, or whether they can perform certain actions on those annotations. These were communicated to the front-end to enable/disable features as needed, and then enforced on the back-end should a clever user attempt to subvert the limitations of his own role. To keep the data footprint as light as possible on the page, we developed a composable serialization scheme that could be applied to any entity in our application. The generic serialization classes break down an entity’s data into a JSON structure, while more specialized serialization classes add or remove data based on a user’s role and permissions. In this way a given annotation might contain meta-data of interest to teachers, but would exclude that meta-data for students. Additional information is added if the user is an administrator, to give them better control over the data on the front-end.
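
Sketched in JavaScript for brevity (the actual serialization classes are server-side, and these names are made up), the layering works roughly like this:

// the generic serializer breaks an annotation down into its base JSON structure
const baseSerializer = (annotation) => ({
  id: annotation.id,
  text: annotation.text,
});

// specialized serializers wrap the generic one, adding or removing data by role
const teacherSerializer = (annotation) => ({
  ...baseSerializer(annotation),
  // meta-data of interest to teachers, excluded for students
  classroomStats: annotation.classroomStats,
});

const adminSerializer = (annotation) => ({
  ...teacherSerializer(annotation),
  internalFlags: annotation.internalFlags, // extra control for administrators
});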

The end result is that, from a user’s perspective, the annotations visible to them, and the data within those annotations, are tailor-made to the user when the eText reader is opened.

Fast-forward to the present day. I have recently been tasked with bringing our eTexts and annotations full circle, back to enotes.com. We brainstormed about the best way to make this happen, as enotes.com lacks the full eText and annotation data, as well as the rich front-end reading experience.

We decided that since the eText and annotation data was already being serialized as JSON for client-side consumption in owleyes.org, it would be trivial to make that same data available via an API. I implemented a simple controller that made use of Symfony’s authentication mechanisms for authenticating signed requests via API key pair, and returned annotation JSON data in the exact same manner that would be used for rendering that data in the eText reader. On inspection, I realized that some of the annotation data wasn’t relevant to what we wanted to display on enotes.com, so I quickly created new serialization classes that made use of existing serialization classes, but plucked unwanted data from their generated JSON structures before returning it. No changes were necessary to the annotation filtering rules, as an API user is, from the ruleset’s perspective, a “public user”, and so would see the same annotation data that users who aren’t logged in on the site would see.

Fetching this data on enotes.com was a simple matter of using PHP’s cURL extension to request data from the owleyes.org endpoint.

The user interface

The eText reader JavaScript code on owleyes.org is complex; it is composed of many different modules – view modules, state modules, utility modules, messaging modules, etc. – that interact together to form a smooth reading experience. It is far more interactive than the pages we wanted to display on enotes.com, so I initially worried that the code would not be entirely reusable because of its complexity.

I was pleasantly wrong.

When I write software I take great pains to decouple code, favor composition over inheritance, and observe clear, strict, and coarse API boundaries in my modules and classes. I, as every programmer does, have a particular “style” of programming – the way I think about and model problems – which, in this case, served me very well.

I copied modules from the owleyes.org codebase into the enotes.com codebase that I knew would be necessary for the new eText pages to function. With some minor adjustments (mostly related to DOM element identifiers and classes) the code worked almost flawlessly. Where I needed to introduce new code (we’re using a popup to display annotations in enotes.com, whereas in owleyes.org we use a footer “flyout” that cycles through annotations in a carousel) the APIs in existing code were so well defined that I was able to adapt to them with few issues. Where differing page behavior was desired (e.g., the annotation popup shifts below the annotation when it gets too close to the top of the screen as the reader scrolls, and above otherwise) the decoupled utility modules that track window and page state already provided me with the events and information I needed to painlessly implement those behaviors. And because the schema of the serialized annotation data delivered over the API was identical to the JSON data embedded in the owleyes.org reader, the modules that filtered, sorted, and otherwise manipulated that data did not change at all.

Why it worked

Needless to say, this project left me very satisfied as a developer. When your code is painlessly reused in other contexts it means you’ve done something right. I’ve made some observations about what made this reuse possible.

First, reusable code should model a problem, or a system, in such a way that the constituent components of that model can act together, or be used in isolation, without affecting the other parts of the model. Modules, classes, and functions are the tangible building blocks we use to express these models in software, and they should correspond with the way we think about these models in our heads. Each should be named appropriately, corresponding to some concept in the model, and the connections between them should be well understood and obvious. For example, in the eText reader, a tooltip is a highlighted portion of text that may be clicked on to display an annotation popup, which displays annotation information. The tooltip and annotation popup are components in the visual model; they are named appropriately, and the relationship between them is one-way, from tooltip to popup.

Second, a given problem may in fact be composed of multiple models that are being run at the same time. Modules that control the UI are part of the visual or display model; modules that control the access to, and filtering of, data are part of the domain model. Modules that track mouse movements, or enable/disable features based on user interaction, are part of the interaction model. Within these models, objects or modules should only perform work that makes sense within the purpose of the model. Objects in the visual model should not apply business rules to data, for example. When one or more objects exhibit behaviors from multiple models, extracting and encapsulating the behavior that is not part of each object’s primary model makes that object more reusable.

Third, objects within a model should have well-defined, coarse APIs. (In the context of objects, an API is an object’s “public” methods to outside callers, or to the objects that extend it.) A coarse API is one that provides the least amount of functionality that its responsibilities require. Yes, the least. An object either stands alone, or makes use of other objects to do its work. If the methods on an object are numerous, the object can likely be broken down into several smaller objects to which it will delegate and on which it will depend to do its work internally. Ask: what abstraction does this object represent, and which methods fulfill that abstraction? Likewise, the parameters to an object’s methods can often be reduced by passing known state to the object’s constructor (or factory function, or whatever means are used to create the object). This chains the behavior of the object to a predetermined state – all remaining method arguments are only augmentations to this state. If the state needs to change, another object of the same type, with different state, is created and used in its stead. The API is coarse because the methods are few, and their parameters are sparse.
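
As a brief sketch of that “state in the constructor” point (illustrative names, not code from the project):

// a coarse API: known state is fixed at construction, so the single public
// method needs only sparse arguments
class AnnotationSorter {
  constructor(sortField, direction) {
    this.sortField = sortField;
    this.direction = direction === 'desc' ? -1 : 1;
  }

  sort(annotations) {
    // remaining arguments only augment the predetermined state
    return [...annotations].sort(
      (a, b) => (a[this.sortField] < b[this.sortField] ? -1 : 1) * this.direction
    );
  }
}

// if different state is needed, create another sorter rather than mutating this one
const byNewestFirst = new AnnotationSorter('createdAt', 'desc');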

Fourth, an object’s state should be stable at all times. Its initial state should be set, completely, through the object’s source of construction (whether by data provided via parameters, or sensible defaults, or both). Properties on objects should be considered read-only, as they represent a “window” into the object’s state. Computed properties should be calculated whenever an object’s relevant internal state changes, usually as the result of a method invocation. I avoid exposing objects that can be manipulated by reference through properties; properties are always primitives that can be re-produced or re-calculated, or collections of other “data” objects that have the same characteristics (usually cloned or reduced from some other source). If an object needs to expose information from one of its internal children, I copy that information from the internal source to a primitive property on the external object itself. If the information is itself in the form of an object with multiple properties, I flatten those into individual properties on the external object. The end result is that an object’s state is always generated internally, as a consequence of method invocations, and cannot be manipulated externally, except by way of its public API (methods).
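
For example, flattening an internal child object into primitive, read-only properties might look like this (a hypothetical shape, not the project’s actual classes):

class AnnotationView {
  constructor(annotation) {
    // initial state is set completely at construction
    this._id = annotation.id;
    // the internal author object is flattened into primitive properties
    this._authorName = annotation.author.name;
    this._authorRole = annotation.author.role;
  }

  // read-only windows into state; no mutable references leak out
  get id() { return this._id; }
  get authorName() { return this._authorName; }
  get authorRole() { return this._authorRole; }
}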

Finally, shared data should exist in “bags” – objects that jealously guard data and only deliver data by value to callers when asked. For example, on owleyes.org a given chapter in Hamlet may contain hundreds of annotations. Annotations may be created, edited, deleted, and replied to in client code. The annotation bag is responsible for holding the annotation data and delivering it, in read-only format, to other modules as requested so that they can render themselves (or perform computations) accordingly. When an annotation changes – when an owleyes.org PUT request is sent to the API and a successful response is received – a method on the bag is invoked to update the annotation. Because annotations are only fetched by value, it does no good for the module that initiated the update to directly manipulate the properties on its own annotation object. No other module will receive the change. Instead, the responsible module tells the bag to update the annotation by passing it the new annotation deserialized from the API response. The bag replaces the annotation in its internal collection and then raises an event to notify listening modules that the given annotation has changed. Any module interested in that annotation – or all annotations – then requests the updated data (in read-only format) and re-renders itself (or re-computes its internal state). The bag, then, is the shared resource among modules (not the data, directly) and it is the source of Truth for all data requests.
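
A condensed sketch of such a bag (the names and the event mechanism are simplified for illustration):

class AnnotationBag {
  constructor(annotations) {
    this._byId = new Map(annotations.map((a) => [a.id, a]));
    this._listeners = [];
  }

  // deliver data by value: callers receive a copy, never a live reference
  get(id) {
    return { ...this._byId.get(id) };
  }

  // replace the stored annotation, then notify listening modules of the change
  update(annotation) {
    this._byId.set(annotation.id, annotation);
    this._listeners.forEach((listener) => listener(annotation.id));
  }

  onChange(listener) {
    this._listeners.push(listener);
  }
}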

Epilogue

There is more I could say on the patterns and principles that arose during the execution of this project, but those enumerated above were of the most import and consequence while porting existing code into its new context. Reusable code is not easy to write. It is not automatic. It is the result of thought and discipline that slowly become habits as they are exercised.

Not all code will be reused; most won’t, in fact. But writing code with a view of extension and reuse in mind can pay off in time and effort in the long run. This is a trade-off, though. The more reusable code tends to be, the more layers of indirection it will possess, necessitating an increase in the number of modules, classes, functions, etc. that need to be created. This trade-off can be mitigated by keeping code as simple as possible. Code can be navigated with relative ease if one can reason about it, divining what modules (etc.) do and how they are related through inference.

While I can’t guarantee your experience will be as pleasant as mine, I do believe that if you think about and put these patterns and principles into action you will one day experience the joy of truthfully telling your manager, “oh, that will only take two weeks!” because your diligence produced well-crafted, reusable code.

My Favorite Book of 2018

I was asked to write about the best book I’ve read in 2018 in 200 words or less. Here we go.

My current obsession is authors Will and Ariel Durant, two of this century’s most prolific historians and pure joys to read. This year I read The Story of Philosophy by Will Durant, a work which reminds me that the past is not so unlike the present, and that the problems of humanity now are the same problems humanity has always faced. Philosophy tells the story of fifteen Western philosophers, from Plato to Dewey, explaining the ideas of each through the lenses of their personal experiences and cultures. Of each, my favorite is Spinoza, a Jewish philosopher who envisioned a god beyond that of his youth. For that he and his progeny were literally cursed by his peers and his people, forcing him, as an exile, to seek refuge among the Dutch. While I disagree with Spinoza’s metaphysics, Durant so masterfully presents a paramount human that I cannot but fall in love with his ethos: tolerance and benevolence that leaves humans free to express convictions peacefully. Philosophy stimulates the mind with both rich ideas and eloquent prose as it brings great ideas and great thinkers to the layman. It has earned its place well on my list of favorite books.

READ IT.

Politicians don't understand the Internet (or anything else)

The internet is justifiably ablaze with criticism of Rudy Giuliani’s recent Tweet in which he blames Twitter for linking his own fat-fingered mistype to an anti-Trump website. I feel bad for defending Twitter because the company is no friend of actual free speech, but my loathing for semi-private companies is only eclipsed by my loathing of politicians, so here it goes.

Twitter auto-links anything that appears to be a URL, roughly defined as text that’s not a dot or a space, followed by a dot, followed by text that’s not a space. The Internet is, of course, defined by URLs, so this feature makes sense for a technology company that hosts Internet-based content.
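
In regular-expression terms, my rough approximation of that pattern (not Twitter’s actual code) would be:

const urlish = /[^.\s]+\.[^\s]+/g;

// a fat-fingered sentence of the kind described below (illustrative, not the verbatim tweet)
'Big win at the G-20.In other news tonight'.match(urlish);
// => ['G-20.In'], which an auto-linker reads as a URL with the .in top-level domain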

Giuliani’s tweet contained a poorly punctuated reference to the G-20 summit – a reference which triggered Twitter’s auto-linking feature, and which prompted a tech-savvy individual to register the URL to which the tweet auto-linked. At this URL, that irreverent middling pleb sought to get under aristocrat Giuliani’s skin by hosting an anti-Trump message, for which Giuliani blamed Twitter, proving his own ignorance of how the Internet (and likely most other technology) works. Of course he’s not alone. Who can forget the Net Neutrality debate in which one of our revered statesmen referred to the Internet as “a series of tubes”?

If Giuliani were my grandfather I could perhaps forgive him for this faux pas, but he’s not. He’s a politician who has a direct impact on the “relationship” between government and the private technology sector. The problem is obvious. Since the government’s only hammer is violence and force, the last person who should be in an authoritative position is someone who has no idea how to identify an actual nail.

As someone who grew up in the Internet era, it’s easy to shrug off this ignorance as just the province of an aged aristocracy who did not. But this is not just a case of generational ignorance. I recall, as a middle-school student, visiting our legislators in Jefferson City. I sat in on a committee meeting in which my representatives discussed the exact content of a public school curriculum that they considered necessary for the education of a younger populace. And I realized, in that moment, that very few (if any) of my representatives had any clue about what constituted actual education, or possessed reasons for their opinions other than it would sound good to their constituents and thus ensure their re-election.

The truth is that politicians are rarely qualified to judge the milieu in which they legislate. And that’s because they are normal human beings, and like the rest of us, their ignorance outweighs their knowledge in most things. “But they have access to specialists who advise them!”, some say. True. And most people have access to the Internet – the largest repository of information ever collected, indexed, and formatted in a user-friendly way. And yet most of us would agree that access to information and expert opinion does not a wise person make.

Human life and social interaction are complex and cannot be reduced to committee-meeting decisions. Even the most qualified professionals are limited by their own experiences and knowledge.

Economist Friedrich von Hayek observed that,

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.”

His observation can, and should, be extended to legislation and regulation. Who would go to a senator or representative for dental work, solely on the basis of their access to “professional opinion”? Nobody. We should limit government severely. Not because there aren’t good people in government positions (maybe like three), but because no matter how virtuous, well-intentioned, or smart they may be, they are still at most only capable of making the coarsest of decisions for the 325 million people they represent.

We need to invent technology that's never even been invented yet.

Microsoft is building a Chromium web browser to replace Edge on Windows 10

From Windows Central:

Microsoft’s Edge web browser has seen little success since its debut on Windows 10 in 2015…

I’m told that Microsoft is throwing in the towel with EdgeHTML and is instead building a new web browser powered by Chromium, which uses a similar rendering engine first popularized by Google’s Chrome browser known as Blink. Codenamed “Anaheim,” this new browser for Windows 10 will replace Edge as the default browser on the platform, according to my sources, who wish to remain anonymous…

Using Chromium means websites should behave just like they do on Google Chrome in Microsoft’s new Anaheim browser, meaning users shouldn’t suffer from the same instability and performance issues found in Edge today.

If this is true, then like most things Microsoft does now, it’s too little, too late. The people who care about their browsing already use alternative browsers (and tell their family members to do the same). At most it will alleviate some enterprise developer suffering, as they now have a management-friendly argument to ditch IE support in favor of Anaheim.

I really don’t understand why MS even bothers with browsers anymore. Why not just strike deals with Google, Firefox, and Apple to pre-install their browsers for cash? Does the Edge feature-set (which I assume will be ported to Anaheim) offer more than these browsers and their relatively mature extension communities? I doubt it.

Time will tell.