Phishing for Cows

This is a story about how my company’s corporate security training saved me from buying 5 adorable, fluffy, and completely imaginary cows. By the end of this tale, you will understand how to apply phishing and security lessons to your daily life, and be thoroughly, completely convinced, my dear friends and colleagues, that I am a gullible idiot. 

Setting the Stage

We have a small farm. I currently have 16 goats and in the past we’ve had cows as well as alpacas. I’m always looking for something cute and fluffy to add to the zoo. As it happens I am absolutely obsessed with Scottish Highland cows, and you will understand why when you see the pictures below. 

On New Year’s Eve, my wife receives a text from her sister linking to a Facebook post identical to the ones shown below. She sends it to me, giddy and excited to add what is basically my spirit animal to our farm. (Scottish Highland minis are stout, poorly groomed, and cuddly… just like me!)

The Listing

The listing reads as follows: 

My cows are looking for a new pasture. 
No good reason putting up for sss…aaa…lll…eee. Need to get them off our property, moving out of state soon. 
Halter broke 
Super friendly 
PM me for more info… 
Can arrange delivery…. 
Don’t message unless you’re serious about buying and don’t waste my time 

How could you say no to that adorable face? Naturally, I send the seller a message. But even then, there were signs of trouble I overlooked.

A False Sense of Urgency 

Our phishing training tells us to be on the lookout for a false sense of urgency. In the corporate world that might look like an email from your CEO telling you to open a file or send information they need to close a deal. 

In this case, it looks like this:

  • “Need to get them off our property, moving out of state soon.” 
  • “Don’t message unless you’re serious about buying. Don’t waste my time.” 

The seller is telling us, in colorful language, that this deal is for a limited time only. And they’re basically telling us if we want to take advantage of this deal, not to ask too many questions. 

But dang them cows are cute, aren’t they? And so, my excitement clouds my judgment and I proceed.

Spelling and Grammatical Errors 

Our phishing training tells us to be on the lookout for poor grammar and poor spelling. Sometimes this is the mark of scammers whose primary language isn’t English, or who are focused on the speed and scale of their scams rather than on a polished message.

In some situations, misspellings are entirely intentional and aimed at defeating spam filters and other protections. 

Now Facebook users are not exactly known for their focus on quality prose, and this post is no exception. But in addition to the lazy spelling and grammar, there’s something that is very much intentional here: 

  • No good reason putting up for sss…aaa…lll…eee. 

Facebook pushes users to sell items through Facebook Marketplace.

And selling livestock through Facebook Marketplace is against their terms, so such posts are almost immediately caught and removed. As someone who buys and sells livestock from time to time, I know this. It should have clued me in that the seller was using weird grammar and misspellings to circumvent these protections.

But dang them cows are cute, aren’t they? And so, I persisted. 

The Seller 

Since apparently I’m the sort of person who stalks the social media of strangers (i.e., a creep), the first thing I do is click through and explore their Facebook profile. There are a few things that immediately stick out to me:

  • The profile has not been used in some time, and magically just reappeared to sell some cows. This is a sign the profile might be compromised.
  • Although the seller’s location is Snyder County, PA, we have no people in common.  Generally speaking, I should be able to play Six Degrees of Kevin Bacon and reassure myself I am dealing with a real human, and that I could verify their authenticity by asking my ex-girlfriend’s uncle’s Avon Lady, or something. 
  • There was no evidence on the seller’s profile that they had a farm, or animals. Their listed occupation was “fry cook” and their public profile didn’t suggest they were doing more with life than taking up space in a parent’s basement.

In short, I know that you should only buy from and sell to people whose authenticity you can verify. I was unable to verify the seller’s authenticity.

But dang them cows are cute, aren’t they? And so, I began a conversation with the mystery fry cook who apparently has a farm in his mom’s basement. 

The Conversation 

I began a conversation with the seller.  He answers my questions in single-word responses and never offers more information than I ask for. Things don’t seem totally off until I’ve satisfied my curiosity about the health and state of the cows, and we start talking price, payment, and delivery. Then things get weird pretty quickly. 

The Haggling 

The seller asked me how I’d like to pay. Since this is a face-to-face transaction, I say I can pay in cash or check. Security experts suggest you always use a secure, refundable payment method like PayPal when buying and selling on Marketplace. In my experience, when someone is local, verifiable, and doesn’t want to bring Uncle Sam into the transaction, cash is king.

The seller says, in a single word, “no.” 

Okay… that’s weird… My Spidey-Sense begins to tingle.

The seller says, “Can you do Venmo or Facebook Pay?  I can do either.” 

They want me to pay via one of the two payment methods I trust the least. Venmo is lousy with scammers and offers little protection to victims. I’d never Venmo anyone I didn’t know and trust in the real world.

But that’s almost irrelevant. I could do either method… but why? I am willing to put cash in their hand the next day. 

I should have stopped right there. 

But dang them cows are cute, aren’t they? And so, I apprehensively step a few inches closer to oncoming traffic. 

The Down Payment 

I tell the seller I can Venmo them the money, but only when we meet in person, and I can physically pet my cows. If we tie the payment to the exchange, that still feels safe.

They tell me, “you will have to Venmo a deposit to have them secured and held for you.” 

Okay… what? Are these cows really flying off the shelf? On New Year’s Eve?

If my Spidey-sense was tingling before, I am now milliseconds from feeling the hot sting of the Green Goblin’s festive pumpkin bombs exploding against my delicate web developer skin.

I know from my training that you should never put down deposits on items whose authenticity you can’t verify. Especially through a payment service that isn’t entirely safe.

My lizard brain is now on high alert. It is trying to balance what I now know to be true (I am getting scammed) against admitting to the shame of being duped, the sunk cost of the time I’ve invested, and the joy this transaction was going to bring my family. Balance fails. I am beginning to have a panic attack. I can either disappoint my family, or get scammed and still disappoint my family.

But dang them cows are cute, aren’t they? So cognitive dissonance it is, then! 

A Scottish Highland cow using a computer and realizing he’s the victim of phishing.


I’m ashamed to admit it: I actually agree to send the seller a down payment, but I dictate the amount based on how much I’m willing to lose. I tell them I am willing to Venmo $200 as a down payment and no more. They accept.

I ask the seller what my total bill is. They say $450 per cow. 

By this point I’ve done some research, and I understand that this price is absurd. Scottish Highland Minis sell for thousands of dollars a head.

I can accept that this person is so desperate to unload these cows that they’re willing to sell them at 10-30% of their actual value.

Or, I can conclude that the seller, in the immortal words of Bart Simpson, “don’t have a cow,” and therefore doesn’t actually care about the value of the cow, the selling price, or the size of the down payment. Regardless of the quantity, it’s free money. 

Fortunately, the seller solved this problem for me by making a misstep.

They sent me their Venmo account. Like the diligence-doing creep that I am, I look at their transaction history and see that it’s a new account with exactly two transactions: a $50 transfer to another account, and a $50 transfer back from that same account. They created an account, then used another account to generate some history to appear authentic.

At this point my brain finally relents to the hammer of rationality that’s been trying to crack through for the better part of the day. This account was clearly created with a specific purpose: to take my money and run.

Then they unsend the message with their Venmo ID and send a new one. I don’t bother to check.  I know what’s going on, I’ve lost nothing but time and self-esteem at this point, and I take my leave. I tell the seller that unless I can come by to see the cows in person, we’re done. 

I won’t give you the gory details but as soon as I mention verifying that the merchandise exists by driving 10 minutes across the county to see them, the conversation goes off the rails.   I block them. I report the post on Facebook.  I tell my family. Tears are shed. 

Mostly mine.

The Conclusion 

After refusing to take multiple logical off-ramps, I eventually found my way to sanity and backed out of this scam before becoming a victim. I was angry at myself for not seeing it much, much earlier. I rubbed salt in the wound by doing a little research to see if I almost fell for a common scam. I had.

  • Doing a reverse image search of the photos the scammer posted to Facebook yielded many, many results. The same photos, of the same cows, have been used many times before. 
  • Googling “Facebook cow scam” showed me that this same scam, with almost no change whatsoever, has been making its rounds on Facebook, all across the country, for several years, and the scam has been well-documented by several individuals including actual Scottish Highland breeders. 

What Did We Learn by Phishing for Cows? 

So, what have we learned?

  • Pay attention when an email, message, or social media post attempts to create a sense of urgency.
  • Sometimes poor grammar and spelling is just lack of polish. Sometimes it’s a sign of something more nefarious. And sometimes, it’s an intentional way of circumventing protection based on scanning text for spam, phishing attempts, and other threats.
  • You should only make purchases from online markets like Facebook Marketplace from individuals whose authenticity you can verify. (It’s OK to creep on their public profile.)
  • Always use a secure payment platform, such as PayPal, that offers adequate protection to both the buyer and seller.
  • Avoid payment platforms like Venmo that lack fraud protection unless you’re dealing with someone you know and trust in real life.
  • Don’t make a down payment on something you can’t verify as authentic in real life. You’re gambling that the seller won’t just take your down payment and scram.
  • If it sounds too good to be true, it probably is. No matter how cute and fluffy.
  • If something feels like a scam, trust your gut and hit eject.
  • Just because someone you know and trust shares something with you online, does not mean that that piece of information is trustworthy.

At the end of the day I lost nothing but time, and a little pride. I lowered my defenses because I was not at work in a high-stakes corporate cybersecurity setting, because the listing was shared by well-meaning people I know, love, and trust, and because frankly I got caught up in the excitement of getting a good deal on something that would have meant a lot to me and my family.

So let’s take a moment to laugh at my expense, and learn from my mistakes.

Okay… you can stop laughing now.

Seriously… am I just a joke to you?

Now you’re just being hurtful.

Tidy First? by Kent Beck

Tidy First? by Kent Beck is a light, easy read about if, when, how, and why to “tidy up” your code.

The “?” in the title is not a typo. Tidy First? is not a how-to: it starts with a question it does not intend to answer. Beck provides a framework built from decades of experience in the craft that you can apply to answering that question for yourself and your own unique situation.

Make no mistake: this is not a book about refactoring. Beck and Martin Fowler have that covered in another book. Tidy First? is a small book about small changes. Refactoring and tidying have a lot of overlap, but Beck defines a tidying as the “cute, fuzzy little refactorings that nobody could possibly hate on.” Tidy First? is about making small structural changes that make behavioral changes easier, and how to make intelligent decisions about when you take time to do it.

The book is divided into three parts: Tidyings, Managing, and Theory.


Tidyings are small structural changes that make behavioral changes (features) easier later. Tidy First? defines and provides examples for 15 different types of tidyings. Some examples:

  • Deleting dead code
  • Adding comments to explain a complicated block of code
  • Deleting explainer comments after you’ve tidied up a block of code, making the comment redundant
  • Moving guard clauses to the top of a function to avoid deep nesting later
  • Normalizing symmetries. In other words: pick a way of doing things, and then do it that way consistently
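The guard-clause tidying is easy to show in a few lines. Here’s a minimal before-and-after sketch, using an invented discount function (my example, not one from the book):

```python
# Before: the happy path is buried at the bottom of nested conditionals.
def discount_before(customer, total):
    if customer is not None:
        if customer.get("active"):
            if total > 100:
                return total * 0.9
            else:
                return total
        else:
            return total
    else:
        return total

# After: guard clauses handle the exceptional cases up front,
# and the happy path reads straight down with no nesting.
def discount_after(customer, total):
    if customer is None:
        return total
    if not customer.get("active"):
        return total
    if total <= 100:
        return total
    return total * 0.9
```

The behavior is identical; only the structure changed, which is what makes a tidying like this safe to land in its own small commit.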

Kent’s tidyings are pretty uncontroversial. The controversy around tidyings exists because software is written by humans, used by humans, and represents financial value and financial risk to humans too.


The first section of the book addresses how to tidy up. Managing addresses if and when to tidy up.

One could be forgiven for thinking that questions like “should I tidy up?” and “should I tidy up now or later?” are at best philosophical distractions or at worst actively harmful. Should we, as experienced, responsible professionals, even entertain the option of not making our code clearer and more resilient to future change?

It turns out that in practice the reality of software being in part a social construct starts chucking rocks at that glass house we were living in. Other people review our code, and may have opinions about whether one pull request should include both structural and behavioral changes. Your team likely has value they are expected to deliver by a certain point, and so you can only spend so much time on the tidying treadmill before you hop off and do feature work.

Beck isn’t so bold as to trade in absolutes. He’s spent a lifetime in the trenches, holding his nose and making trade-offs like the rest of us. Tidy First? doesn’t give us answers. It gives us strategies for reaching sound conclusions in our own unique situations. It helps us make decisions about:

  • When to tidy: before behavior changes, after behavioral changes, later, or never
  • When to stop tidying
  • Whether or not to combine structural changes and behavioral changes (put tidying and feature enhancements in the same Pull Request)
  • When and how to batch

Beck’s advice gives you a list of questions to ask yourself to build confidence that, whether you decide to tidy now, later, or not at all, you’re making the right decision for the right reasons. It gives you a framework from which you can work with other well-meaning developers to resolve disagreements about if and when to tidy, and how to organize the work.


Tidyings helps you do your job as an individual contributor. It tells you how to tidy up. Managing frames tidying as something you do as part of a larger team working towards a shared vision of success and technical excellence. It helps the collective you make decisions on if and when to tidy up.

At this point we haven’t answered a really important question: why tidy up?

And as it turns out, “why tidy up?” is a question you should be prepared to answer. If the Code People don’t think they should have to explain why tidying actually supports revenue, it’s easy for them to end up at odds with the Money People.

As Kent Beck says in the book,

“When geeky imperatives clash with money imperatives, money wins. Eventually.”

Kent Beck, Tidy First?

Beck tries to explain his position using financial markets and options trading as a metaphor. The metaphor flew straight over my thick skull. Fortunately I think I understand his direct arguments, because I’m an engineering manager and I’m constantly listening to the concerns of the Coders and the Money People, and helping all parties find balance.

His base points were:

  • Deliver value early. Delivering working code now is worth more than delivering tidy code later. You can profit off the delivered feature. You can collect direct feedback on how to make it better. So, sometimes tidying later is the pragmatic thing to do.
  • Create optionality. Tidying your code promotes future change, which provides options to do more, faster, in the future. Tidying keeps the code ready to accept new features and deliver future value. It supports the money decisions.
  • Practice balance. Go through the mental exercise of making decisions about if and when to tidy up. Tidying is low-risk and low-effort. Doing the exercise will help prepare you for higher-risk, higher-effort decisions and disagreements later.
  • Make reversible changes. When possible, make changes that are easy to reverse. This makes experimentation lower-risk.
  • Coupling drives the cost of software. Coupling means two units of code must change together. Reduce coupling to reduce the cost of change. When tidying, consider reducing coupling. Be on the lookout for tidyings that will turn into rabbit holes that last hours, days or weeks, due to cascading coupling. Maybe tidy that up “after.”
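Coupling is easiest to see in a small sketch. In this invented example (mine, not Beck’s), two functions both know how a date is stored, so a storage-format change forces both to change together; moving that knowledge behind one helper removes the coupling:

```python
from datetime import date

# Before: both functions "know" dates are stored as "YYYY-MM-DD" strings.
# Changing the storage format means changing every one of these call sites.
def report_age_coupled(record):
    y, m, d = record["created"].split("-")  # knows the format
    return (date.today() - date(int(y), int(m), int(d))).days

def is_recent_coupled(record):
    y, m, d = record["created"].split("-")  # knows it again
    return (date.today() - date(int(y), int(m), int(d))).days < 30

# After: the format knowledge lives in exactly one place.
def created_on(record):
    return date.fromisoformat(record["created"])

def report_age(record):
    return (date.today() - created_on(record)).days

def is_recent(record):
    return (date.today() - created_on(record)).days < 30
```

If the storage format ever changes, only `created_on` changes; the two callers are no longer coupled to it.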


Tidy First? is a book for anyone creating software but especially for those creating software with a team and organization behind it. The tidyings themselves are, in some cases, so obvious as to hardly need a paragraph, let alone a chapter. It is not a how-to, but rather a “how-to think” kind of a book. It provides a way to navigate the choices, conversations, and decisions we all run into while developing software. It helps us understand the “money people” and helps us form arguments that can help the “money people” understand us.

My understanding is that Tidy First? is the first short book in a longer series. I’m excited to see what Kent comes up with next.

What I Got Out of Agile + DevOps East 2023

I’ve never gotten the chance to attend a tech conference in person. Maybe that changes next year. This year, I got the chance to virtually attend Agile + DevOps East.

What I wanted out of the conference isn’t exactly what I got. As a baby engineering manager and aspiring agilist I hoped to gain a better understanding of agile software development and DevOps, and take some lessons home to implement. What I got was a little confidence that I am already on the right path, and a whole lot of prognostication about AI.

But it was useful.

I’ll start with key take-aways that I pulled from the conference, and then move into a summary of each of the talks I attended.

My Key Take-Aways from Agile + DevOps East 2023

At a high level, this is what I took away from Agile + DevOps East 2023:

  • The software industry is losing its religion when it comes to agile.
  • We still care about the foundations of agile, the things the manifesto sought to correct. But people are tired of the systems and the rigidity. Agile is in its reformation.
  • The industry has long since moved on to DevOps and continuous integration, which to me looks a lot like “agility in practice.”
  • Software teams should be self-contained: all the expertise required to deliver working software should be on the team.
  • The AI is coming! But coming to make us more efficient, not replace us.

AI-Powered Agile and DevOps

This talk by Tariq King used, of all things, the evolution of Super Mario Bros. to demonstrate how and why software development processes have evolved to improve quality, efficiency, and flexibility as the industry matured. Though the metaphor was sometimes a little stretched, it made the point and definitely spoke to my inner 80s kid who’s still playing Mario games with his kids!

We’re currently in the Agile + DevOps era. While we don’t know exactly what shape the next era will take, we can be pretty certain AI is going to shape what’s next.

But current AI is tuned to make us efficient, not accurate. We need to use the tools ethically and intelligently to shape positive outcomes.

Key Take-Aways

  • Software development models change. We’re in the Agile + DevOps age right now. AI will influence what is next.
  • AI will speed up all the processes in our development lifecycle including product management, requirements analysis, development, testing, and release.
  • But productivity cannot be measured in quantity alone. Productivity combines quantity, efficiency, and quality.
  • We need trustworthy AI tools. Otherwise, we risk using AI to deliver garbage faster.
  • To deliver quality, AI needs to be testable, controllable, observable, and explainable. These are attributes the current iteration of AI lacks.
  • Don’t build on assumptions. Form a hypothesis, test, and verify.

A Minor Point of Disagreement

One of the slides in Tariq’s presentation offered an example of how AI can help make development processes more efficient. A business analyst used AI to convert chat transcripts to user stories.

In my experience that example misunderstands where the valuable work is being done. The real work was facilitating a conversation and asking the right questions to introspect the problem. All the AI did was crunch words into a different format. Which is, in fairness, valuable. It eliminated grunt work. It did not eliminate or even speed up the real work of the analyst.

In my experience, AI has not been great at processing raw transcripts of conversations. Conversations have a pace and cadence that can be hard to parse. People speak in fragments that are clear in the moment but sound fragmented in a raw transcript. There is unspoken information and context in the negative space of the conversation that tools cannot capture.

How AI is Shaping High Performance DevOps Teams

Vitaly Gordon’s talk was mostly about measurement. He makes the point that engineering is often the least managed function in an organization (based on context, I think he really meant least measured). In DevOps, we should be measuring the health and productivity of our team and product (DORA metrics are one example).

In the future, AI can help us measure and improve these metrics.

Key Take-Aways

  • Engineering is often one of the least managed and measured functions of a business.
  • To reduce lead time we should reduce wait time. In other words, measure and identify blockers like slow PR approval, and figure out how to eliminate the blockers.
  • Use automated testing to reduce Change Failure Rate.
  • Use AI to generate more test coverage.
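As a rough illustration of the kind of measurement the talk advocated, here’s a minimal sketch of two DORA-style metrics computed from deployment records (the field names and schema are my own assumptions, not anything Gordon presented):

```python
from datetime import datetime

# Hypothetical deployment records: commit time, deploy time, outcome.
deploys = [
    {"committed": "2023-11-01T09:00", "deployed": "2023-11-02T09:00", "failed": False},
    {"committed": "2023-11-03T09:00", "deployed": "2023-11-06T09:00", "failed": True},
    {"committed": "2023-11-07T09:00", "deployed": "2023-11-07T21:00", "failed": False},
]

def lead_time_hours(deploys):
    """Average hours from commit to deploy. Long lead times usually mean
    long *wait* times, e.g. PRs idling in review."""
    hours = [
        (datetime.fromisoformat(d["deployed"])
         - datetime.fromisoformat(d["committed"])).total_seconds() / 3600
        for d in deploys
    ]
    return sum(hours) / len(hours)

def change_failure_rate(deploys):
    """Fraction of deploys that caused a failure."""
    return sum(d["failed"] for d in deploys) / len(deploys)
```

Even a toy calculation like this makes blockers visible: if average lead time is measured in days while the actual build-and-deploy takes hours, the difference is wait time you can attack.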

DevSecOps in a Bottle: The Care and Feeding of Pocket Pipelines

Jennifer Hwe’s talk focused on how her team improved security, maintainability, and delivery by bringing DevSecOps practices into an organization with a lot of complexity. Her team was charged with implementing DevSecOps, CI/CD, and containerization on a legacy product that required a focus on heightened security practices, and had to serve multiple teams that were previously working in silos, on separate networks with their own ops and security processes.

Key Take-Aways

  • Innovation was being held back by lack of DevSecOps automation. They couldn’t deliver new features quickly because manual processes held them back.
  • When you plan to implement DevOps or any Dev*Ops variant, you’ll likely cut across various parts of the organization with different cultures, and different opinions on how things should be done. Be prepared to identify and address both technical and cultural challenges.
  • Take a phased approach.
  • Change is slow. In an organization as large as Northrop Grumman, their transition was measured in years.

Lead Without Blame

This talk by Tricia Broderick felt like the philosophical sibling of Sarah Drasner’s Engineering Management for the Rest of Us. It’s all about the fact that organizations often hire managers up from the developer pool, but do not prepare former individual contributors for their new role. This talk felt like the missing manual.

Key Take-Aways

  • As a technical manager don’t write code because you’re good at it, or because it’s your happy place. That’s not your job anymore.
  • “Sitting together” doesn’t make you a team. That just makes you a group. Healthy collaboration makes you a team.
  • Individuals can win while whole teams and projects fail. That’s still a failure.
  • Transition yourself out of the “hub” of operations. You’re not that important. You’ll bottleneck productivity and team growth if you stay there too long.
  • Don’t focus so much on individual accountability.
  • Focus on building team members who are responsible, motivated learners.
  • Conflict good. Drama bad.
  • Further Reading: Lead Without Blame

The Potential of AI and Automated Testing, Conquer Test Script Challenges with AI

This talk by Jason Manning, Nyran Moodie, and Orane Findley was more of a high-level, open discussion about how AI has and will continue to change software testing. They discussed some of the pitfalls we need to be aware of as we build more reliance on AI to build tests and perform automated testing.

Key Take-Aways

  • AI can help you get data-driven metrics about your product (but didn’t really dive into “how”)
  • It may be possible for AI to scan web pages and generate tests for you (again, “how”)
  • We need to consider risks to privacy and security as we plug AI into our products, our tests, and our intellectual property
  • Consider how to use AI without sharing sensitive data or IP
  • At this point, a human needs to be involved in order to ensure the results of AI-driven processes are accurate and secure.

We Got Our Monolith to Move at Light Speed

This talk by Corry Stanley and Marianna Chasnik hit a bit close to home for me. It was all about how they moved a legacy monolith at Discover Financial from a “few releases a year” to a two-week release cycle. Sounds a lot like the journey I’ve been on. Discover succeeded by bringing Ops skills into the product team, using modern tools, infrastructure, and techniques to drive productivity, release faster, and reduce defects.

Key Take-Aways

  • The product team needs DevOps skills built-in
  • Train your whole team in DevOps
  • DORA metrics are lagging indicators of health
  • Treat Pre-production (staging and test environment) failures as production failures. Act accordingly, and act fast.
  • Avoid broken baselines. Use tools and processes like standardized branching models, automated deploys, automated quality tools, automated testing, and branch protection rules to shift quality and validation as early in the process as possible.

The Art of Getting Less to be Faster, Smoother, and Better – Embracing the Agile Principle of Simplicity

Robert Clawson’s talk was near and dear to me as the head of a project that suffers from a legacy of organic, unnecessary complexity. Robert advocated for achieving simplicity and productivity by maximizing the work not done.

Key Take-Aways

  • People and time are finite.
  • Our incentive structures rarely reward subtraction, even though subtraction can be an incredibly intellectual, creative, and valuable endeavor.
  • Features “not worked on” are valuable. It means you saved your resources, or chose to use them to do something with more value.
  • Sometimes removing something is the most valuable thing you can do. Clawson’s example was the K-brick, which optimized cost and materials without sacrificing structural integrity.
  • Look for opportunities for reuse. What do you have? How can you reuse or further capitalize on it without adding complexity?
  • Further reading: Subtract: The Untapped Science of Less

AI and the Future of Coding

Christopher Harrison from GitHub gave a refreshingly down-to-Earth talk about the current and future state of AI.

Generative AI is an enhancement to software development that can make us faster, but AI cannot write full applications, write perfect code, or replace developers.

Inexperienced developers risk shipping bad code by over-relying on AI and not understanding the results it generates.

Experienced developers driving AI can use it to work faster and reduce the pain and time spent on unpleasant tasks.

Key Take-Aways

  • Automated code review is coming. But don’t forget about other automated tools like GitHub Actions to automatically check security, code quality, etc.
  • AI can help with unpleasant tasks like writing unit tests.
  • AI can help with uncommon syntax, like figuring out regular expressions.
  • AI can help you rapid prototype and experiment.
  • But AI can’t help you write good code if you don’t already know how to write good code.

Technical Debt for the Nontechnical

If you hang out with programmers long enough, you’re bound to hear one of them vent about technical debt. What is technical debt? Why is it so bad? And more importantly, why should you care?

Let’s begin with that classic cliché we all know and love, the dictionary definition.

Debt: a state of being under obligation to pay or repay someone or something in return for something received.

Merriam-Webster Dictionary

Technical Debt is created when we accept technical trade-offs for a short-term advantage with long-term consequences. Technical debt is a bargain we strike with our future selves. If we don’t want to suffer the consequences, it must be paid back.

I’m Not a Programmer. Why Should I Care?

“I’m not a programmer. I’m in sales, marketing, customer service, the C-suite, or somewhere else. Why should I care about your technical debt techno-babble?”

Why do we write software, and who do we write it for?

For most of us the cold, capitalist answer is we write software to make money for our organization. But that’s an outcome of doing the job well, not the reason we do it. We write software because we’ve identified a problem we can solve for customers. We code to solve problems, delight our users, and keep them coming back. If we do that, the money happens.

You have a stake in all of that too. Other roles in the organization have customer and market insights critical to the software team and the product’s success. If sales, marketing, and the rest of the organization are rowing in different directions, the right customers won’t know we solved their problem, and software won’t succeed. We’re all in this together.

So if we’re not careful, good intentions on the part of the rest of the organization to boost revenue, get a product or feature to market faster, or close a sale can create perverse incentives to take on technical debt.

Even worse: if a nontechnical person asks the technical team to sacrifice “doing it right” in order to “do it cheap” or “do it quick,” and they get what they want without seeing a consequence, they’re going to keep asking for it.

This works, until it doesn’t.

And so, anyone that can influence software’s direction can create the conditions where technical debt can flourish and lead to failure. That’s why you should care about technical debt.

The consequences of technical debt. "goto" by XKCD

Examples of Technical Debt

Technical debt can be created all sorts of ways. Below are a few examples of what this looks like in practice:

  • Putting sloppy code in production. The code “works” but is poorly written. The customer may get the feature faster but the choice slows down future development because the work is hard to understand, buggy, hard to change, brittle, or inflexible.
  • Progress by Copy-and-Paste. You deliver new features by copy-pasting old ones. In the short-term this may deliver immediate value. But you’ve multiplied the complexity and time required for future changes, enhancements, and bug fixes.
  • Putting inefficient code in production. You know your code hogs resources. The customer may get your changes faster, but their experience suffers from poor performance, and the software may be significantly more expensive to host.
  • Putting code with known security vulnerabilities in production. You know your code has potential security risks and choose to release it anyway. The customer may get the feature faster, but you’ve introduced code that puts all customers, and your organization’s credibility, at risk.
  • Skipping Documentation. You don’t document your code in order to release it faster. In the future, developers have to stop and build an understanding of the “old code” before they can make any changes. Progress slows going forward. If you skip public-facing documentation, you may fail to build institutional and user-level knowledge of changes, missing the chance to educate and advocate for your own product.
  • Skipping Automated Testing. The code “works”… so you think. But you skipped automated testing to release faster. You miss bugs. You introduce regressions in features that used to work before. You eventually find yourself buried in toil: work that is pure overhead, devoid of long-term value, because you chose to skip QA.
  • Building in Toil. The software works but processes that could be automated are built on human intervention. This may help the product or feature release faster. But it introduces friction into the user’s experience of the product and results in a product that can only scale by adding more humans. (And those humans usually require salaries.)
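To make the copy-and-paste bullet concrete, here’s a minimal sketch (the functions are invented for illustration, not from any real codebase): two near-identical functions born from duplication, followed by the single parameterized function that pays the debt down.

```javascript
// The debt: a second function created by copying the first and tweaking one character.
// Every future change (rounding rules, localization, negative amounts) now happens twice.
function formatUsdPrice(amount) {
    return '$' + amount.toFixed(2);
}

function formatEurPrice(amount) {
    return '€' + amount.toFixed(2);
}

// Paying it down: one parameterized function shared by every call site.
function formatPrice(amount, symbol) {
    return symbol + amount.toFixed(2);
}

console.log(formatPrice(19.5, '$')); // "$19.50"
```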

These are examples of technical debt. And to reiterate: some debt is okay, so long as you pay it back.

What Are the Consequences of Technical Debt?

In the financial world, failing to pay your debts has consequences. The bank starts taking stuff, and eventually life starts sounding like a country tune.

A chart illustrating that technical debt causes the cost of change to increase over time.

In software, failing to pay your debts has consequences too. It results in a high Cost of Change. In other words, a product with high technical debt will be harder, slower, and more expensive to build on than the same product with less technical debt. Think of it like inflation: the same dollar buys less new feature development today than it did yesterday.

Here are some examples of what it looks like when software is over-leveraged on technical debt.

  • Too Much Toil. Developers are spending the majority of time engaged in toil: work that is pure overhead and has no long-term value to the organization. But without it, the system eventually grinds to a halt. 
  • Stagnation. If you’ve spent enough time expecting “fast” solutions over “good” solutions, this eventually catches up with you. Your developers can’t get to new feature development because they are buried in fixing bugs.
  • Inefficiency. Adding a programmer to the team doesn’t result in “1 programmer worth of value to the organization.” You’ve just added an additional rower to a rowboat stuck in peanut butter, instead of water.
  • Turnover. You can’t keep talented developers because they want to solve interesting problems, not make a career of cleaning someone else’s code.
  • Declaring Technical Bankruptcy. Your software may become so cumbersome to maintain that the only sane path forward feels like starting from scratch (which has its own set of problems).


To sum it all up: technical debt is created when trade-offs are made to accept worse code in exchange for short-term gains. This can be strategically useful, but only if you honor the promise to pay it back.

Anyone who can influence software decisions can create the conditions for technical debt. We’re all in this together, and should default to promoting mature, sustainable engineering practices over shortcuts taken for short-term gains.

Over-leveraging on technical debt has very real consequences that may not surface right away. If your progress stagnates because your engineering resources are stuck fixing problems caused by a history of ignoring mature, sustainable engineering practices, you’re probably over-leveraged on technical debt.

But now you know what technical debt is, and what it looks like in practice. You also know how to spot evidence that your organization has taken on too much in the past. Armed with this information, you’ve got the opportunity to help your organization make smarter, more sustainable decisions to reduce technical debt, and avoid creating more in the future.

How to Code as an Engineering Manager (Maybe Don’t?)

Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

Fake Chaos Theorist and Dinosaur Assault Victim, Ian Malcolm

So, you’re an engineering manager. Your backlog seems overwhelming. You think “what better way to support my team than to pick a ticket and reduce their workload?”

You could. But stop for a moment and consider if you should.

Maker’s Schedule vs. Manager’s Schedule

You assign yourself a ticket from the critical path.

Then what happens? You start with good intentions. But then you get distracted. In the negative space between meetings, you just barely have time to remember what you did last time. Three weeks later you haven’t completed the ticket.

Not only have you not helped your team, you’ve actually let them down by making an agreement you couldn’t keep, and preventing on-time delivery.

Maker’s Schedule, Manager’s Schedule is as true today as it was on the day it was written. To sum it up: programming takes time and focus. Programmers need the freedom to ignore distractions. In contrast, a manager’s schedule is all about distractions: a project meeting here, a presentation to leadership there, one-on-ones, agile ceremonies, “unsticking” individual contributors.

Every six seconds, a manager somewhere on the planet says, “when am I supposed to get the real work done?”

Source: September, 2023 Journal of Fabricated Statistics

Turns out the meetings were the real work all along, sucker!

But I Really Want to Code!

I know, right?

Coding is my happy place. Marking something Done can mean the difference between an emotionally draining day with nothing to show for it, and logging out with a sense of accomplishment. Let’s face it: even the worst requirements document still defines Done better than most management responsibilities.

But that’s not a good reason to pick a ticket and risk breaking your team’s agreement to deliver something.

So to scratch your itch, here are some suggestions:

  1. Don’t assign yourself work on the critical path. I’m just repeating this again, in case it didn’t sink in. You mean well, but this is a “the road to Hell is paved with good intentions” kind of situation. If you want it released on time, assign it to someone else. 
  2. Select low-effort technical debt. Tech debt tasks, such as cleaning up a smelly slice of code, small refactors, test enhancements, and documentation, can often fit into the margins of your schedule, aren’t blocking others, and will help the team in the future.
  3. Experiment. Is there an internal tool that would improve your team’s experience? Is there a process that could be automated? Go get started! A recent example from my team is a tool I built that monitors Jira, GitHub, and other tools and sends each team member a morning email to remind them of their obligations for the day, like PR assignments. It fit into the margins of my week just fine.

    Just be aware that your team may get jealous if they see you plucking the “interesting” projects and leaving them picking up scraps.
  4. Defragment your calendar. If you really want to code, spend some time defragmenting your calendar. Basically, try to rearrange your planned engagements so they’re packed together with less empty space between them. A successfully defragmented calendar looks like large blocks of meetings (boo!) and, as a result, large blocks of free space (yay!). Working for an organization that’s willing to embrace this concept collectively definitely helps.

Now the obligatory throat clearing: this is just a recommendation I’ve found works in my experience. I don’t always get it right. But when the urge strikes to raise my hand and say “I’ll look into that,” I reflect on whether or not I actually have the time.

Exception Handling in JavaScript Made Easy

Welcome to the wild world of Exception Handling, where programmers write code that tries to do the right thing and catches itself in the act of doing something else. That’s going to make sense in a few minutes. Trust me.

1. What is Exception Handling?

Exception handling can be a daunting subject, especially to inexperienced developers. Since I don’t know where you’re at in your journey, let’s forget about programming for just a minute. In the English language, what does the word exception mean?

Something excepted; an instance or case not conforming to the general rule.

(“Exception Definition & Meaning”)

So that means an exception is something out of the ordinary. On the negative side of the spectrum, that kind of sounds like an error, doesn’t it?

We can start from a simple, high-level premise: an Exception is just a kind of error, and exception handling is how we deal with them.

2. Exception Handling with the Try…Catch Statement

Most programming languages that support exception handling do so with a very similar structure: the try…catch statement, and JavaScript is no different.

try {
    // Do some work
} catch (error) {
    // Handle errors
}
You’ll notice there are two code blocks: a try block and a catch block. A try { … } block cannot stand alone: it must be followed by a catch { … } block, a finally { … } block, or both. (catch is the most common companion; we’ll get to finally shortly.)

The Try Block

The try { ... } block tries to run the code within it. If an exception is thrown during execution, the rest of the code in the block is not executed.

(Note the use of the verb throw. When an exception occurs, we say the code threw an exception. You may also hear of raising an exception; throwing an exception and raising an exception mean the same thing.)

There is very little more to say about the try block:

  • The keyword try followed by a bracketed block of code.
  • The code in the block tries to execute. If an exception is thrown, the code after the exception is thrown is not executed.
  • Must be followed by a catch { ... } block, a finally { ... } block, or both.

The Catch Block

A catch block handles Exceptions thrown from the try block that precedes it. It begins with the catch keyword, optionally followed by (error), where error defines the name of a variable that will reference the Exception, followed by a block { ... } containing the exception handling code.

There are a few things to know about the catch statement:

  • It has two forms. You can simply write catch { ... } if you don’t need the error’s details to handle it, or you can write catch (error) { ... } where error is a variable that references the caught Exception.
  • Don’t catch Exceptions you can’t handle. You should only use try...catch to catch errors you can reasonably handle. If you can’t handle the error, then don’t catch it. Let it “bubble up.”
  • Write Tight, Limited try...catch blocks. You should only wrap code in try...catch for which you are prepared to catch and handle Exceptions. Avoid wrapping entire method bodies in a single try...catch. If you do, you risk handling/squashing exceptions you don’t actually want to catch.
  • Re-throw Exceptions you don’t know how to handle. If multiple kinds of Exceptions are possible, inspect the Exception by type (using instanceof) or by name (error.name === 'DomainError'), handle the kinds you can, and re-throw the rest.

Now that we have try and catch, we know just enough syntax to implement exception handling in our code. Here’s a simple example:

/**
 * The following code will throw and catch a RangeError by trying to create an Array with an
 * invalid range.
 */
try {
    // Try to create a new array with an invalid length.
    const array = new Array(-1);
} catch (error) {
    // Check for the kind of error we want to handle.
    if (error instanceof RangeError) {
        // Handle the error by reporting a more useful error.
        console.error('An array cannot have a length less than zero.');
    } else {
        // Rethrow any errors we don't actually know how to handle.
        throw error;
    }
}

The Finally Block

A finally block provides code that runs after the try and catch phases, regardless of whether an error happened or not. It can be used to perform steps that should always happen regardless of the success of the code in the try block, or whether an error was thrown or caught.

A practical use case is in a graphical application when you want to show a progress indicator or spinner while a process runs, and hide the spinner when it completes regardless of success or failure:

/**
 * The following code shows a spinner, performs a long-running calculation, and then hides
 * the spinner when the process completes, regardless of whether or not an Exception was
 * thrown.
 */

// Show the spinner. Assume showSpinner() and hideSpinner() are defined elsewhere.
showSpinner();

try {
    // Assume generateReport() exists, does a bunch of complicated work, and could
    // throw an Exception if something goes wrong.
    generateReport();
} catch (error) {
    // ... handle the exception ...
} finally {
    // Hide the spinner after try and catch complete.
    hideSpinner();
}

Note: While less relevant in front-end JavaScript code, finally is also very useful for closing resources when a process completes, regardless of success. Think open file pointers, database connections, etc.

3. The Error Class

In JavaScript the Error class is the base for all runtime errors and one of the fundamental building blocks of exception handling. There are a number of subtypes of Error, including EvalError, RangeError, ReferenceError, and SyntaxError.

You can also extend Error to create your own custom error types. For example, we might use a DomainError when we want to throw an Exception caused by a violation of our domain logic. Extending Error looks like this:

/**
 * A DomainError represents an error caused by a fault in business logic.
 */
class DomainError extends Error {
    /**
     * Creates a new DomainError.
     * @param {string} message The error message.
     * @param {object|null} options Options that specify the cause.
     */
    constructor(message, options) {
        // Call the base Error class constructor
        super(message, options);

        // Set the name of our Error type
        this.name = 'DomainError';
    }
}

The Error class has several standard properties, set via its constructor:

  • message – The message that describes why the error occurred.
  • options – An optional object which can specify the cause of the error. (You won’t use this much.)

Depending on the browser, Error also supports a number of nonstandard properties, which you can read about in the MDN documentation for Error.

Now you know how to create an Exception. But creating an Exception doesn’t do anything by itself. You need to throw it. To throw it, use the throw keyword:

// Assume userInput is a string collected from the user.
const age = parseInt(userInput, 10);

if (isNaN(age)) {
    throw new DomainError('age was not a number, which violates our business rules');
}

Throwing an exception breaks the flow of execution. Any code that follows the throw statement will not be executed. The exception “bubbles up” through the call stack to the closest catch { ... } block, which will handle the Exception. If the exception isn’t caught, the JavaScript engine will handle it by sending the error to the console.
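To see bubbling in action, here’s a small sketch (the functions are illustrative, not from a real codebase): the Exception is thrown deep in the call stack, passes through a function with no try...catch of its own, and is handled by the nearest enclosing catch block.

```javascript
// Hypothetical sketch of an Exception "bubbling up" the call stack.

function validateAge(age) {
    if (isNaN(age)) {
        // Thrown here, deep in the call stack...
        throw new Error('age is not a number');
    }
    return age;
}

function parseForm() {
    // No try...catch here, so the Exception passes straight through this function.
    return validateAge(parseInt('not a number', 10));
}

function handleSubmit() {
    try {
        return parseForm();
    } catch (error) {
        // ...and caught here, in the closest catch block up the stack.
        console.error('Could not submit: ' + error.message);
        return false;
    }
}

console.log(handleSubmit()); // false
```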

One other important caveat of JavaScript: you can throw literally anything. It doesn’t have to be an Error. But I don’t recommend it!

/**
 * In the code below we get user input and convert it to a number. If the user input is not
 * a valid number we throw a string as an error.
 */

try {
    // Assume <input id="age"/> exists and we get its value as a string.
    const ageInput = document.querySelector('#age').value;

    // Parse the input into a base-10 number.
    const age = parseInt(ageInput, 10);

    // Use isNaN() to check if the user entered an invalid number.
    if (isNaN(age)) {
        // Look what we're doing: throwing a string instead of an Error. Very legal and very cool.
        throw `Some lunatic thinks ${ageInput} is a valid age. Enter a number, you goofball.`;
    }
} catch (error) {
    // When an Exception is caught, log a message that tells us the type. It should be "string".
    console.log("An error occurred and its type was " + (typeof error));
}

Let’s review:

  • Error is the base type for all errors in JavaScript.
  • Several subclasses of Error already exist in JavaScript including EvalError, RangeError, ReferenceError, and SyntaxError.
  • We can extend Error to create our own custom error types.
  • Error’s constructor takes two arguments: message and options. Both are optional.
  • We can throw anything in JavaScript. But we should throw only Error and Error subtypes.

4. When Do Exceptions Get Thrown?

Exceptions get thrown under the following conditions:

  • You throw an Exception from your own Userland code, using the throw statement.
  • JavaScript throws an exception from an internal function or as a result of some condition your Userland code caused, such as a TypeError, SyntaxError, RangeError, etc.

All errors in JavaScript are thrown as exceptions. When you see an error reported in your browser console, that’s an exception being thrown and eventually being handled by the browser. For example, any of the following simple JavaScript statements throw exceptions that would get reported in the browser console:

// Throws a SyntaxError
JSON.parse('this is invalid json');

// Throws a TypeError because book.metadata is not defined.
const book = { title: 'The Catcher in the Rye' };
const id = book.metadata.id;

If you’ve seen an error reported in your console, you’ve already seen Exceptions in action. See? You’re further along than you realized!

5. Exceptions in Asynchronous Code (Promises)

The try...catch...finally structure is synchronous by nature. This means that try...catch cannot catch and handle an Exception thrown by asynchronous function calls such as fetch(). So how would you handle a failure when calling an API endpoint?

JavaScript has you covered. Asynchronous functions return Promises, and Promises support exceptions via the Promise.prototype.catch() and Promise.prototype.finally() methods.

/**
 * In this hypothetical code block, we first display a spinner to show the application is
 * working. We load a user record from a REST API. If the request fails we throw an Exception
 * which we'll catch later. If the request succeeds we convert the body to JSON. If conversion
 * fails it will throw an Exception that we'll catch later. If conversion to JSON succeeds, we
 * send the user record to a hypothetical controller component to handle it.
 * If the request fails or returns invalid JSON, the thrown Exception is caught, logged, and an
 * error is displayed to the user.
 * After the request completes and either success or failure is handled, we hide the spinner
 * via finally. Assume showSpinner(), hideSpinner(), showErrorDialog(), and controller are
 * defined elsewhere; the '/api/user' endpoint is hypothetical.
 */

// Show the spinner
showSpinner();

// Make the REST API request
fetch('/api/user')
    .then((response) => {
        // If we don't get an OK response, throw an Exception. fetch does not do this on its own.
        if (! response.ok) {
            throw new Error('API request failed.');
        }

        // Convert the body to JSON. Will throw an Exception if the body is not valid JSON.
        return response.json();
    })
    .then((user) => {
        // Set the user, which will update the view.
        controller.setUser(user);
    })
    .catch((error) => {
        // Log the failure
        console.error(error);

        // Show an error to the user.
        showErrorDialog({
            title: 'API Request Failed',
            message: 'Failed to load data from the backend'
        });
    })
    .finally(() => {
        // Turn off the spinner.
        hideSpinner();
    });


Congratulations. Now you know how to catch, handle, and throw Exceptions in JavaScript code. You know how to handle Exceptions in asynchronous code via the catch and finally functions built into Promises. In a future article, I’ll be teaching you how Exceptions work in PHP, where things get even more interesting.
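One parting aside: if your codebase uses async/await, you can get the same catch-and-finally behavior with an ordinary try...catch...finally, because await surfaces a rejected Promise as a thrown Exception. Here’s a sketch with a hypothetical loadUser() function and an injected fetchUser callback (both invented for illustration):

```javascript
// Hypothetical: handle an asynchronous failure with try...catch...finally via async/await.
async function loadUser(fetchUser) {
    let status;
    try {
        // await turns a rejected Promise into a thrown Exception...
        const user = await fetchUser();
        status = 'loaded ' + user.name;
    } catch (error) {
        // ...so an ordinary catch block handles the failure.
        status = 'failed: ' + error.message;
    } finally {
        // finally still runs in either case. Hide your spinner here.
    }
    return status;
}

// Simulate a failing API call.
loadUser(() => Promise.reject(new Error('API request failed.')))
    .then((status) => console.log(status)); // "failed: API request failed."
```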

Why My Team Chose Web Components Over React

My team recently backed out of a decision to adopt React as the foundation of a product rewrite and chose Web Components instead. This post explains why.


To sum up the situation: my organization had decided that we would do a rewrite of our application. Think Basecamp’s “The Big Rewrite.” Only without Basecamp’s resources.

While words of support for a rewrite were vocalized, reality set in: the organization couldn’t live with the feature freeze required to build a new app. And so, we changed strategies to focus on iterating on what we have. But that required us to reconsider key technical decisions that had supported the rewrite. Should we still adopt React?

Grafting React onto our legacy codebase was infeasible. Reaching our goals of modernizing our UI and modularizing our code required us to limit consideration to solutions we could use today, with our existing codebase. The obvious choice was Web Components.

Why We Decided Not to Adopt React

I’d like to tell you we’re adopting Web Components because we did some deep, data-driven research that supports the decision. We didn’t.

I’d like to say the Web Component API is so mature and so universally adored that it has knocked React from its throne. It’s not, it hasn’t, and it probably won’t.

More than anything else, I’d like to tell you we made a brilliant decision and it was my expert leadership that got us there. It did not, and there’s a nonzero chance I’m an idiot and you shouldn’t even be reading this. You’ve been warned!

In reality once we reached consensus that a rewrite wasn’t realistic, there was very little decision making to be done. Here’s why.

Our Current Technical Realities

Our existing application is Vanilla JavaScript, written and bundled in a specific order such that dependencies are resolved by the order in which the code is parsed, not by any modern concept like imports or modules.

We could overcome that with a few weeks of refactoring. But then we have to contend with the way the frontend code was originally written, which is to say it has set up camp in the innermost circle of global scope hell.

Modern tooling, like bundlers and frameworks, expects code to be well-organized. Ours is not. And as previously discussed, we can’t spend a year or so rewriting it from scratch.

Given the shape of our codebase, attempting to graft on React would be difficult and painful. We would have to fundamentally reshape our application to conform to React’s expectations. And since we’ve already concluded that we don’t have the time or resources for that, React adoption was a non-starter.

Given this new shared reality, what could we do?

Start By Defining the Problem

Let’s face it: developers like React, and it can be easy to have a preferred solution in-hand and try to work backwards from there. In this instance we caught ourselves in the act and reacted accordingly.

We had to start over by redefining our problems and limitations.

What Problem Are We Trying To Solve?

  • We want to adopt a Component Driven Development philosophy and ship modular, reusable UI components.
  • We want to build accessible, attractive, consistent, responsive, and reusable components from which we can compose complex views in our application in the future.
  • Any solution to those problems must “play nice” with our existing codebase so we can introduce our components as evolution, not a single revolutionary rewrite.

Web Components Solve Our Problem

Web Components are not popular. They’re not easy. And they certainly don’t solve all the problems. But they have a lot going for them, which makes them the perfect solution for us:

  • They’re here. Web components are a native web API that is now well supported in all major browsers.
  • They’re stable. They’re a native web API, which means we can start using them today without worrying about npm dependency hell making life problematic for us in the future.
  • We can use them today. Because they are a native API, and because once you register a web component you simply use it as an HTML tag, we can introduce web components to our codebase today without any major rewrites, refactoring, or changes to our build process.
  • Web Components Help Write Modular Frontend Code. Component is right there in the name. Writing components that implement a generic user interface feature will support our desire to write modular code that separates the view from storage and logic going forward.
  • They Don’t Preclude Using a Framework in the Future. Major frameworks like Angular, Vue, and React are all figuring out how to coexist and support web components. If we reach a point where we can adopt a framework in the future, we don’t have to throw away our work.
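To make the “use it as an HTML tag” point concrete, here’s a minimal sketch of what registering and using a custom element looks like. The tag and class names are hypothetical, not from our codebase:

```html
<!-- A minimal custom element. "hello-badge" and HelloBadge are hypothetical names. -->
<script>
  // A custom element renders itself when it's attached to the document.
  class HelloBadge extends HTMLElement {
    connectedCallback() {
      const name = this.getAttribute('name') || 'world';
      this.textContent = `Hello, ${name}!`;
    }
  }

  // Register the element once, under a hyphenated tag name...
  customElements.define('hello-badge', HelloBadge);
</script>

<!-- ...then use it anywhere in existing markup, no bundler or framework required. -->
<hello-badge name="legacy app"></hello-badge>
```

Because the browser handles registration and rendering natively, a component like this can be dropped into a legacy page without touching the build process.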

Obligatory Welcome Post

Hi! I’m Brian Reich. Formerly of Reich Web Consulting. Currently Software Development Manager at CORE Business Solutions. I’m a technical manager, recovering individual contributor, husband, dad, casual farmer, and some other stuff you likely don’t care too much about.

I hate social media. I love writing. Sometimes it feels good to share my thoughts with other people. So here I am, once again asking you to contribute your time to listen to my brain droppings.

My goal with this blog is to write about things that matter to me. That includes programming, and the art and science of being a technical manager. It includes musings on the state of the world. And it includes talking about my hobbies and passions, like woodworking and gardening.

I’ll do my best to separate different kinds of content so you can find what you care about.