Black Rednecks and White Liberals by Thomas Sowell

This morning I finished reading Thomas Sowell’s book Black Rednecks and White Liberals.

Sowell has spent a lifetime (he’s 90 years old) studying the root causes of racial tension throughout the world, especially in the United States. Written in 2005, this book is far more relevant today – fifteen years later – than when he first wrote it. Even then, Sowell saw the racial tensions fomenting in this country and vocally protested the policies and ideas that have continued to push us all into corners defined by the past instead of securing solutions in the present. He offers solutions based not on popular slogans or emotional sentiment, but on the hard facts of history – on how other peoples in other times successfully adapted to one another to make their lives mutually more prosperous. If we can’t do that, he warns, the result will be pain and misery for all.

“While the lessons of history can be valuable, the twisting of history and the mining of the past for grievances can tear a society apart. Past grievances, real or imaginary, are equally irremediable in the present, for nothing that is done among living contemporaries can change in the slightest the sins and the sufferings of generations who took those sins and sufferings to the grave with them in centuries past. Galling as it may be to be helpless to redress the crying injustices of the past, symbolic expiation in the present can only create new injustices among the living and new problems for the future, when newborn babies enter the world with pre-packaged grievances against other babies born the same day. Both have their futures jeopardized, not only by their internal strife but also by the increased vulnerability of a disunited society to external dangers… To be relevant to our times, history must not be controlled by our times. Its integrity as a record of the past is what allows us to draw lessons from it.”

Scott Hanselman is Wrong

If you’ve done any Microsoft development in the last two decades you probably know the name Scott Hanselman, and are probably familiar with his blog at hanselman.com. I used to enjoy reading Hanselman’s articles, back when I wrote code for the Microsoft platform, and generally considered him to be a pretty even-keeled individual with generally insightful thoughts and balanced opinions.

That was back before virtue signaling was all the rage, of course. Now it appears that he’s fallen into the trap of proving how woke he is by showing us how to change our initial git branch name from master to main.

Why make this change, you ask?

I’ll quote from his blog:

The Internet Engineering Task Force (IETF) points out that “Master-slave is an oppressive metaphor that will and should never become fully detached from history” as well as “In addition to being inappropriate and arcane, the master-slave metaphor is both technically and historically inaccurate.” There’s lots of more accurate options depending on context and it costs me nothing to change my vocabulary, especially if it is one less little speed bump to getting a new person excited about tech.

Aside from completely misunderstanding the meaning of master in git (think: master copy, not slave-driver), Scott has managed to lower himself to the point of sweating over a metaphor that encompasses all of human history, not just that of the American Antebellum South. (Check out Thomas Sowell’s chapter The Real History of Slavery in his book Black Rednecks and White Liberals. Sowell is a black, award-winning economist who grew up in the Bronx.)

The word master has quite a few other definitions that would be blotted out if we just tossed it to the side because wokeness. For example:

  • having complete control over a situation
  • learning to do something properly
  • the title of famous painters
  • the first and original copy of a recording
  • the head of a ship that carries passengers or goods
  • a college degree
  • a revered religious teacher
  • an abstract thing that has power or influence
  • a main or principal thing
  • etc.

Should we expunge these parts of our language because Scott feels bad about things he didn’t do, hundreds of years before he was born? Should we ditch any word that acts as a trigger mechanism for someone else’s discomfort?

I can tell you right now, moist isn’t making the keep list.

It makes me sad to see Hanselman become a useful idiot, and it makes me sad that, hundreds of years after the destruction of the Western slave trade (by the West!), white people are stooping to this level of bullshit because they’ve adopted some perverse doctrine of racial original sin, and are constantly trying to atone for it.

If Scott thinks self-flagellation will fix any actual racial problems in this country, he’s deluded. Changing a git branch name won’t eliminate qualified immunity, bring corrupt cops to justice, or stop our elected officials from shredding the Bill of Rights. It won’t fix fatherlessness in black America, or the blight of narcissistic, checked-out parents in white America. It won’t provide healthcare, it won’t feed the poor, it won’t address mental health issues. It won’t protect us from white school shooters, and it won’t save us from black gang bangers. It’s a token gesture without teeth; emotions on meth.

People like this black American nationalist give me hope, though. He knows that real change will come through brutal honesty and personal responsibility, from all sides, and that’s the only way America can get over the blood feud we’ve been nursing for roughly two hundred years. He gets it, and he doesn’t need the contrition of false white guilt to satisfy a grudge. It’s a breath of fresh air, hearing this man speak, and I hope for all our sakes – even for the sake of Scott Hanselman – that many more like him will bring sanity to the sanitarium that is present-day America.

Bye Bye Google Play Music

This is a post I uploaded to Facebook on March 9.

In my quest to remove my personal data from Google services and quit giving them my money, I’ve been searching for a service that would transfer my entire Google Music library (which is substantial) to another service. I don’t mind paying a subscription fee – I just don’t want to pay it to Google – but I’ve used Music for so long that transferring my library manually would be a monumental task.

Over the weekend, however, I discovered Soundiiz.com, which automates the transfer of pretty much any music service to any other. They charge a monthly fee of $4.50 for the service, but I only had one transfer to conduct, and for $4.50 it’s a no-brainer (I will cancel my Soundiiz subscription before I have to pay another month).

In about 30 minutes Soundiiz managed to transfer my entire Google Music library to Spotify with a ~90% success rate (not all music on GM is available on Spotify, and I had a lot of custom uploads on GM which, to my knowledge, Spotify does not support). I was blown away at how smooth the process was.

So if you need to get away from a Big Tech music service, I highly recommend checking out Soundiiz. They support pretty much every major (and many a minor) music service.

The only thing Soundiiz didn’t sync was my podcasts, but I only had about a dozen or so on GM, so manually setting them up in Spotify was a cinch.

UPDATE: March 19

I am completely happy with my move to Spotify. Not only can I use it on all of my devices, I’ve noticed a few immediate benefits:

  • I can connect remotely to devices in my house that also run Spotify software, and play my music through those devices. So I can control my desktop computer with my phone, which is a huge feature that Google Play lacks.
  • Spotify’s podcast library is bigger than Google Play’s. Well, at least for the podcasts that pique my interest.
  • If you want to listen to your own local music on Google Play, you have to upload it, and you’re limited to a certain number of songs. (To be fair, it is a generous amount.) Spotify takes a different approach: you configure it to look for music in a directory on your device, and it will play whatever it finds there. If you have multiple devices and want your local library available on each, you will need to sync your local library with a cloud service (I recommend MegaSync).
  • Spotify’s music recommendations seem better. Google Play is decent, but Spotify seems to have more relevant recommendations.

import facepalm;

Sometimes bugs can be particularly evasive, and today I hunted down one such bug.

A module deep in our codebase was throwing an Error, but only in Mozilla’s Firefox browser.

The error was NS_ERROR_ILLEGAL_VALUE.

I did some quick DuckDuckGoing and found that the error occurs when a native DOM function in Firefox is passed a value of a type it does not expect.

The stack trace led back to this line in our application code:

const hit = find( cache, c => c.original === obj );
if ( hit ) {
    return hit.copy;
}
// ...some time later...
return someUncachedObject;

“@-E$!&&@#”, I thought. “Why is lodash’s find() function passing a bad value to a native function?”

You see, I use lodash all the time. So much, in fact, that I made one fatal error in my diagnosis.

I assumed that because the find() function was defined, lodash had indeed been imported.

How. Wrong. I. Was.

It turns out that window.find() is, in fact, a non-standard, but nevertheless nearly omnipresent function that is designed to search a DOM document for a particular string. And since any function attached to window is global, a missing import of the same name – say, a missing lodash/find import – would not raise any alarms. The code built. The code ran. And it ran without error in every browser but Firefox. Why?

The window.find() function expects a first argument of type String. In modern browsers other than Firefox (pretty much all the Chromium-based browsers), passing a non-String argument to window.find() simply causes the function to return false. As you can see in the snippet above, this rendered the cache useless, but the application nevertheless continued to work. In Firefox, however, window.find() will throw if its first argument is not of type String. Thus my bug.
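Here’s a minimal sketch of the failure mode as I understand it (the variable names are mine, not our actual code):

const someObj = {};
const cache = [{ original: someObj, copy: {} }];

// With no `import find from "lodash/find"` in scope, this bare `find`
// resolves to the global window.find(), which expects a String:
const hit = find(cache, c => c.original === someObj);

// Chromium: window.find() returns false for the non-String argument,
//           so `hit` is falsy and the cache is silently bypassed.
// Firefox:  window.find() throws NS_ERROR_ILLEGAL_VALUE instead.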

I am ashamed to say how long it took me to realize lodash/find was not the function being called.

In the end I applied the great wisdom of Spock’s ancestors, and started considering things that could not possibly be the case, until it dawned on me that perhaps – just perhaps – find() was not what it appeared to be after all.



And a single import find from "lodash/find"; statement fixed the bug.
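For the record, the fix in context (the same snippet as above, with the import restored):

import find from "lodash/find";

// `find` now refers to lodash's collection helper, not window.find():
const hit = find( cache, c => c.original === obj );
if ( hit ) {
    return hit.copy;
}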

Fun with Homebrew casks

One of my favorite utilities for OSX is Homebrew, a package manager that lets you easily install programs from the terminal.

One of my favorite pastimes is thumbing through the Homebrew Cask recipes to find new programs and utilities to install. Some are pretty nifty, like Zotero, which manages research bibliographies. Or Electric Sheep, which harnesses the power of your “sleeping” devices to crowdsource digital artwork. Or Finicky, which lets you specify which of your web browsers should open specific links. (Maybe you use Brave for normal browsing but want to open all google.com links in Chrome.)
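As an aside, Finicky’s configuration is itself just JavaScript, so the Brave/Chrome example above can be expressed in a few lines. A minimal sketch (check Finicky’s docs for the exact matcher API – the browser names and hostname here are only examples):

// ~/.finicky.js
module.exports = {
    defaultBrowser: "Brave Browser",
    handlers: [
        {
            // send all google.com links to Chrome
            match: ({ url }) => url.host.endsWith("google.com"),
            browser: "Google Chrome",
        },
    ],
};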

Unfortunately the Cask recipe files have no real descriptions, so I usually just fish through them and dig out the homepage links of any Cask recipe file that has an interesting name. It’s kind of like a digital treasure hunt.

To make things even more fun, I cloned the homebrew-cask repo and came up with a simple shell script that will randomly choose a recipe and open its homepage for me.

# pick a random Cask recipe and open its homepage in the default browser
find ~/projects/homebrew-cask/Casks -type f | \
    shuf -n 1 | \
    xargs cat | \
    ack homepage | \
    awk '{print $2}' | \
    xargs open

Joker Movie Review

I finally watched Joker last night.

It is dark. But the movie earns all of its praise. It really is an extremely well done film. Everything works: cinematography, acting, music. It is very well crafted.

There are several major intertwined themes running through the film. A lot of people have interpreted it in a lot of ways, and they aren’t wrong; there’s a lot to unpack.

The biggest theme is probably “rich, powerful people are extremely out of touch with average, or below average people”, which we see every day now (Epstein, Weinstein, Clintons, etc.). This is the big Wayne connection to Joker. The movie tips the whole Batman story on its head; you actually feel sympathy for Arthur Fleck, and while you don’t think the Waynes are necessarily “bad”, the film portrays them as very out of touch, and condescending to the plight of the less fortunate. It’s actually well done; I didn’t feel that I was being preached at, like “look at the bad rich people!”. The message was more, “see the chasm here – these people have no idea what it’s like to live in normal society”.

The second is probably the damage that single mothers can do to sons. The void of an absent father can have a shattering impact on a child. This was probably the most gut-wrenching part of the film for me, and I’m surprised the writers tackled it. Well done though.

The next is probably mental health, and how we deal with disturbed people in society. Although I’m not sure I’d call this a “theme” because it’s more of a given in the film, that society handles this poorly. It’s not making a statement about it per se, it’s just assuming it. But the portrayal is well done, and thought provoking.

A tangential theme is that people need some degree of power in their lives – to feel they control something – and when that is taken away, the hopelessness they experience can cause them to seek power in other, socially taboo ways (e.g., violence), not because they would have preferred that, but because they psychologically have no alternative. This theme is more subtle, but probably the biggest statement the movie makes. The progressive criticism (“this movie is just about white, violent, incels!”) targets this theme, because progressives don’t believe that white males can ever be powerless.

Overall I highly recommend the film. As I said, it earns all the praise it gets. It’s one of the few films I’d consider a modern masterpiece.

Book Review - The Mythical Man-Month

The Mythical Man-Month is one of those books that is, well, mythical in the circles to which it pertains – that is, the software engineering and computer science fields. It is mythical because many people have heard of it, and most agree that it is “classic”, but not many remember exactly why. Perhaps they have never read it, only absorbed its ideas through hearsay. Or perhaps they did read it, but so long ago that its principles have since been carried away by the tide of time.

Either way, I have finally finished the book in what I consider to be an unreasonable amount of time. It’s not overly long, or overly verbose, but I have a bad habit of reading a little from a lot of books at the same time, which means it takes me a while to finish any one of them. I took notes as I went so that hopefully time will be more gracious to my mind when someone asks me, in the years to come, if I’ve read Frederick Brooks.

Widely considered the central theme of the book, Brooks’s Law, in summary, is that adding programmers to a late software project will not make it go faster, but rather slower. This was a pattern Brooks saw during his years as a manager in larger companies that needed many engineers to write software either for internal use, or eventually for sale as products. Managers assumed that the central problem of software development – why projects did not finish on time or on budget – was a mechanical one that could be resolved with mechanical solutions. Maybe there just wasn’t enough manpower. Or maybe the tooling was inferior, and retarded progress. Or maybe the wrong language had been chosen for the task. While all of these things can, and do, affect software engineering endeavors, Brooks’s major insight was that they were the accidents of software engineering, and not its essence; and that the essence is what causes projects to fail.

The essence of “systems programming” (as Brooks called it) is one of complexity – an irreducible complexity1 – that of itself cannot be fixed by mechanical solutions. This complexity arises from the fact that software development is a creative, human process. The engineer must, to write a program, conceptualize the problem space correctly and then use the tools at his disposal to create a solution. As projects grow, engineers are added, the consequence of which, as Brooks keenly observed, tends to make the project slower because it increases the number of communication pathways among team members (with every addition), and the conceptual foundation of the project becomes spread among many minds, in ways that are often fragmented and incorrect. This, Brooks argues, is the core problem, and the solution to the problem is to adapt to it rather than try to conquer it.

“Complexity is the business we are in, and complexity is what limits us.”

How does one adapt to the problem of conceptual complexity in software engineering? Brooks proposed a number of solutions.

Conceptual integrity and communication

Brooks proposed that the conceptual integrity of a project – the core ideas about what the problems are and the models used to represent those problems – is of primary importance and must be safeguarded. The most efficient way to ensure this happens is to entrust that integrity to one, or at most a couple of, individuals, who are responsible for enforcing it by vetting the work of other team members on the project. They become the source of conceptual truth.

Communication and team structure

Because complexity scales with the number of communication pathways in a team, Brooks proposed that “surgical teams” be used in most software projects. These teams are composed of the conceptual guardian(s) (the “surgeon”) and as few people as possible to get the work done. These teams are part of an organization as a whole, however, and there is always a management structure with which they must integrate. The key to good management, according to Brooks, is to realize that management is about action and communication. The person at the top should rely on his subordinate program managers to take action when needed, and he should give them the authority to do so. He should never, ever demand action when reviewing a general status report, however, because this will debilitate his program managers and move the decision making power further from the decisions that need to be made. Project managers should be concerned almost exclusively with managing the lines of communication in the team, and not with making decisions at all. Pushing decision making “down” to the program managers is effective because delegated power gives them a genuine stake in the organization, and, paradoxically, preserves the real authority of the center.

“The purpose of organization is to reduce the amount of communication and coordination necessary…”

“…the center gains in real authority by delegating power, and the organization as a whole is happier and more prosperous.”

Complexity in code

Complexity can be addressed in the code itself by reducing the mental burden a programmer has to carry while implementing code that has conceptual integrity. In the first edition of Brooks’s book, he insisted that all programmers on a team be familiar with all modules (or entities) within a software project. In Brooks’s mind, this was a good way to safeguard the integrity of the system, because everyone would have a working understanding of all code. In a subsequent edition of the book, he backtracked on this position, because it essentially suffered from the mental equivalent of the communication problem. Code changes over time; no programmer ever has a complete and accurate understanding of the system because it is not static. Brooks eventually came around to a view promoted by Canadian engineer David Parnas:

“[David] Parnas argues strongly that the goal of everyone seeing everything is totally wrong; parts should be encapsulated so that no one needs to or is allowed to see the internals of any parts other than his own, but should see only the interfaces… [I initially proposed that] Parnas’s proposal is a recipe for disaster [but] I have been quite convinced otherwise by Parnas, and totally changed my mind.”

Information hiding, or encapsulation, allows a programmer to “use”, or take advantage of, code without having to know how it works internally – only how to ask it for something. The mental footprint of understanding an interface (the way to ask code for something) is orders of magnitude smaller than the mental footprint required to understand the implementation behind the interface. And interfaces don’t change nearly as often as implementations (in a well-designed system).
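To make the idea concrete, here is a minimal JavaScript sketch of information hiding (my own example, not one from the book) – callers can use only the interface, and the implementation behind it is free to change:

// createCounter() returns an interface; `count` is the hidden
// implementation detail that no caller can touch directly.
function createCounter() {
    let count = 0;
    return {
        increment() { count += 1; },
        value() { return count; },
    };
}

const counter = createCounter();
counter.increment();
console.log(counter.value()); // 1 -- `count` itself is unreachable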

Side-effects (changes created by a piece of code that affect things beyond that code, or even the system), likewise, should all be identified, well understood, and encapsulated (or eliminated) to reduce the mental burden of worrying about tangential consequences to implementations, which are often causes of bugs and project delays.

Documentation

Documentation is central to adapting to complexity. Documenting decisions made in a software project is part of fostering the creative process itself:

“…writing the decisions down is essential. Only when one writes do the gaps appear and the inconsistencies protrude. The act of writing turns out to require hundreds of mini-decisions, and it is the existence of these that distinguishes clear, exact policies from fuzzy ones.”

Documentation need not be overly verbose, however – although Brooks believed overly verbose documentation is better than none at all. And the documentation regarding technical decisions – design and implementation – should be as close to the code itself as possible (even within the same code files) to ensure the documentation will be maintained and updated as the code itself changes. The goal of documentation should be twofold: 1) to create an overview of the particular system concern the documentation addresses, and 2) to identify the purpose (the why) of the decisions made in regard to that concern. Documentation is not only for other programmers to read; it is often to benefit the original author as well.

“Even for the most private of programs, prose documentation is necessary, for memory will fail the user-author.”
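In modern terms, that kind of decision record can sit directly beside the code it concerns. A hypothetical JavaScript sketch of the “why” documentation Brooks describes (the scenario and names are mine):

// WHY: drafts are persisted locally before sending, because the send
// API can fail mid-flight and users expect not to lose their work.
// (Decision: localStorage over IndexedDB -- the payload is tiny.)
function saveDraft(draft) {
    localStorage.setItem("draft", JSON.stringify(draft));
}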

Conclusion

There are many more things I could say about Mythical Man-Month. More subtle insights, longer expositions on points articulated above. But these points are the ones that stuck with me most, the ones I felt were immediately relevant to my own work and career. Mythical Man-Month is one part history, one part knowledge, and one part wisdom. It lets us know that great minds in the past struggled with, devised solutions to, and retracted and revised solutions to, the problems that make programming a struggle of genuine creativity.

This last quote is taken from a critique of Brooks, one that he found to be excellent, and expresses the same conclusion (albeit maybe a bit more pessimistic) at which Brooks himself arrived: the central problem of software is complexity, not tools or processes, and that will never change.

“Turski, in his excellent response paper [to No Silver Bullet] at the IFIP Conference, said eloquently: ‘Of all misguided scientific endeavors, none are more pathetic than the search for the philosopher’s stone, a substance supposed to change base metals into gold. The supreme object of alchemy, ardently pursued by generations of researchers generously funded by secular and spiritual rulers, is an undiluted extract of wishful thinking, of the common assumption that things are as we would like them to be. It is a very human belief. It takes a lot of effort to accept the existence of insoluble problems. The wish to see a way out, against all odds, even when it is proven that it does not exist, is very, very strong. And most of us have a lot of sympathy for those courageous souls who try to achieve the impossible. And so it continues. Dissertations on squaring a circle are being written. Lotions to restore lost hair are concocted and sell well. Methods to improve software productivity are hatched and sell very well. All too often we are inclined to follow our own optimism (or exploit the hopes of our sponsors). All too often we are willing to disregard the voice of reason and heed the siren calls of panacea pushers.’”


  1. Not to be confused with the same term used by some creationists to defend their particular ideas about the origin of the universe.

Author’s note: I have removed the two articles I posted previously with extensive quotes from The Mythical Man-Month, because I was unsure whether the number of quotes posted constituted “fair use”, and I wish to respect the copyright holder’s interests. I have not been contacted by the copyright owner or received any kind of DMCA letter; this is entirely my own decision. I have used those quotes to formulate the article above.

Protip - return from exceptional conditions early

During a recent code interview, I noticed a React component with a render method written in the following (abbreviated) form,

render() {
    return this.state.items.length > 0 ? (
        <ComponentWithLotsOfProps
            prop1={}
            prop2={}
            propN={}
            ...
        />
    ) : (
        ''
    );
}

where ComponentWithLotsOfProps had at least a dozen props, some of which were not simple primitive values.

While there is nothing technically wrong with this render method, it could be better. It suffers from a few deficiencies.

First, ternaries are objectively difficult to read when they are not short. It is difficult to grok what the method actually produces because the whole ternary is returned, requiring the reader to do double work to find the “implicit” returns (there are two) rather than looking for the easily identifiable return keyword.

Second, one must read the entire method to know what gets returned if there are no items in state. Is it a component? Is it null? Is it an empty string? That is unknown until the whole method has been read.

Third, if additional conditions are required in future work to determine what will be rendered, they cannot easily be introduced in this method.

A better alternative is to omit the ternary, and explicitly return the exceptional condition values first.

render() {
    if (this.state.items.length === 0) {
        return '';
    }

    return (
        <ComponentWithLotsOfProps
            prop1={}
            prop2={}
            propN={}
            ...
        />
    );
}

Due to reduced nesting, this is far easier to read, and return values are also easily identifiable. If additional conditions must be evaluated in the future, modifying this method becomes much simpler:

render() {
    if (this.state.items.length === 0) {
        return '';
    }

    if (this.state.items.length === 1) {
        return (<SingleItemComponent item={this.state.items[0]} />);
    }

    return (
        <ComponentWithLotsOfProps
            prop1={}
            prop2={}
            propN={}
            ...
        />
    );
}

As with most things in programming: the simpler, more explicit, the better.

Modern JavaScript tooling is too complicated? Hacker News

This post on Hacker News is worth the read, not only for the OP’s posted content, but because of the follow-up comments (routinely the better part of a Hacker News submission).

You know it’s time for popcorn when the thread starts like this:

…if only the tooling was too complicated, it would not be too bad. IMAO the entire front-end JS world is one big pile of MISERY, complicated is not the word or the problem at all.

Destructuring Reconsidered

While working with React for the last five months, I’ve noticed that React developers make extensive use of object destructuring, especially in function signatures. The more I use React, the less I like this trend, and here are a few short reasons why.

There are countless books by wise industry sages1 that discuss how to write good functions. Functions should do one thing, and one thing only; they should be named concisely; their parameters should be closely related; etc. My observation is that destructured function parameters tend to quickly lead to violations of these best practices.

First, destructuring function parameters encourages “grab bag” functions where the destructured parameters are unrelated to each other. From a practical point of view, it is the destructured properties of the actual parameters that are considered, mentally, as parameters to a function. At least, the signature of a destructured function reads as if they are:

function foo({ bar, baz }, buzz) {}

A developer will read this as if bar, baz, and buzz are the actual parameters of the function (you could re-write the function this way, so they might as well be), but this is incorrect; the real parameters are buzz and some other object which, according to best practice, should be related to buzz. But because the first parameter (param1) is destructured, we now have properties bar and baz, which are one step removed from buzz, and therefore the relationship between param1 and buzz is obscured.

This can go one of three ways:

  1. if param1 and buzz are related, we do not know why;
  2. if param1 and buzz are not related (but bar and baz are related to buzz) then the function is poorly written;
  3. if bar, baz, param1, and buzz are all closely related, then the function is still poorly written, as it now has three “virtual parameters” instead of just two actual parameters.

Second, destructured functions encourage an excessive number of “virtual parameters”. For some reason developers think this function signature is well written:

function sendMail({ firstName, lastName, email }, { address1, city, state, zip }, { sendSnailMail }) {}
// function sendMail(user, address, mailPreferences) {}

“But it only has three parameters!”, they say. While technically true, the point of short function signatures is to scope the function to a single, tangible task and to reduce cognitive overhead. For all practical purposes this function has eight parameters. And while the purpose of this function is fairly obvious based on its name, less expressive functions are far more difficult to grok.

Third, destructuring makes refactoring difficult. Sure, our tools will catch up some day. But from what I’ve seen, modern editors and IDEs cannot intelligently refactor a function signature with destructured parameters, especially in a dynamically/weakly typed language like JavaScript. The IDE or editor would need to infer the parameters passed into the function by examining invocations elsewhere in code, and then infer the assignments to those parameters to determine which constructor function or object literal produced them, then rewrite the properties within those objects… and you can see how this is a near impossible feat. Or at the very least, how even the best IDEs and editors would introduce so many bugs in the process that the feature would be avoided anyway.

Fourth. Often developers must trace the invocation of a function to its definition. In my experience, code bases typically have many functions with the same name used in different contexts. Modern tools are smart, and examine function signatures to try and link definitions to invocations, but destructuring makes this process far more difficult. Given the following function definitions, the invocations would all be valid (since JS functions are variadic), but if a code base had more than one function named foo, determining which invocation is linked to which definition is something of a special nightmare.

// in the main module
function foo({ bar, baz }, { bin }, { buzz }) {}

// in the bakery module
function foo(bar, { baz }) {}

// invocations
foo({ bar, baz });

foo(anObject, anotherObject);

foo(1, { bin }, null);

In contrast, functions with explicitly named parameters (usually the signature parameters are named the same as the variables and properties used to invoke the function) make these functions an order of magnitude easier to trace.

Fifth, destructured parameters obscure the interfaces of the objects to which they belong, leaving the developer clueless as to the related properties and methods on the actual parameter that might have use within the function. For example:

function handle({ code }) {}

What else, besides code may exist in the first parameter that will allow me to more adequately “handle” whatever it is that I’m handling? The implicit assumption here is that code will be all I ever need to do my job, but any developer will smirk knowingly at the naivety of that assumption. To get the information I need about this parameter I have to scour the documentation (hahahahaha documentation) in hopes that it reveals the actual parameter being passed (and doesn’t just document the destructured property), or manually log the parameter to figure out what other members it possesses. Which brings me to my last point:

Logging. I cannot count the number of times I have had to de-destructure a function parameter in order to log the complete object being passed to the function, because I needed to know some contextual information about that object. The same applies for debugging with breakpoints. (I love when Webpack has to rebuild my client code because I just wanted to see what actual parameter was passed to a function. Good times.)
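When I need that context, the least painful pattern is simply taking the whole parameter in the signature and destructuring inside the function – a hypothetical handler mirroring the earlier example:

function handle(event) {
    console.log(event); // what else is on here besides `code`?
    const { code } = event;
    // ...
}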

Don’t get me wrong – I’m not completely against destructuring. I actually like it quite a bit when used in a way that does not obscure code, hinder development, or hamstring debugging. Personally I avoid destructuring function parameters in the signature, and instead destructure them on the first line of the function, if I want to alias properties with shorter variable names within the function.

function sendEmail(user, address, mailPreferences) {
    const { firstName, lastName, email } = user;
    const { address1, city, state, zip } = address;
    const { sendSnailMail } = mailPreferences;
    //...
}

This pattern conforms to best practices for defining functions, and gives me a lightweight way to extract the bits of information I need from broader parameters, without making it painful to get additional information from those parameters if I need it.

Don’t use the new shiny just because it’s what all the cool kids do. Remember the wisdom that came before, because it came at a cost that we don’t want to pay again.


  1. Clean Code, Code Complete, etc.