3 posts tagged with "software"


Don't be a 10xer!

In software, engineers are sometimes referred to as "10xers" - implying that they're capable of producing 10x the output of a comparable person. When I was getting started in the industry, this was a bit of a goal for me. I wanted to be perceived as superhuman, and the label is a pretty explicit endorsement of that. I'd sometimes work a little (or a lot) extra just to prove that I could do more than my peers. One guaranteed way to 10x is to work 10x the hours.

Over time, I did end up being what I call a "feature 10xer". I could often produce, measurably, 10x the code or features of other people on my team. I wouldn't often produce 10x the reliability, but I'd certainly make the CEO and PMs happy, which was my goal. This also meant sometimes hacking around systemic problems (getting to the root of issues isn't fast). At early-stage startups, this seemed desirable, but it came with many drawbacks.

At startups with older codebases, this attitude still prevails, although it takes a different form: the "intuition 10xer" is the person who knows the codebase and its hacks well enough to deftly navigate through it far faster than (otherwise extremely competent) peers. This makes them the "debugger of last resort", and it means they can design features in their head and then implement them without needing to change the design, since they know all the caveats and patterns before reading any code. These people have also often been entrenched in their own decisions for so long that they have little empathy for new team members. Their intuition is so dialed into the history of this specific codebase that they can't even imagine what it would feel like not to know where everything is - everything is where it belongs, after all!

Both of these roles break down at critical times in the business: when you need to react to growth -- either in engineering headcount, or in scaling pressure from customer growth. I've observed the following common patterns:

  • The "feature 10xer" creates debt faster than anyone else, creating a positive feedback loop of debt. To sustain their relationships and pace, they don't have the bandwidth to refine shipped work. As a result, a minimum of 10 people is needed to focus on the reactive work they leave behind. If you're lucky, those 10 people won't need to consult the 10xer to fix things, but eventually this work will slow the 10xer down, which is extremely frustrating for them. I've observed that this is when the feature 10xer transitions into an intuition 10xer - or leaves out of frustration.
  • The "intuition 10xer" becomes a blocker for design and review, bottlenecking all work. You grow the team, the performance issues start coming up, and the system's complexity belongs to an individual. They get frustrated when it takes others 10x the time to fix things, but their desire to "just do it themselves" and fix things faster also slows the team down, because they're responsible for reviewing nearly everything, and they're busy fixing things. Everyone sits around waiting for this person, contributing to a feedback loop of slowness.

At the end of the day, if one team member is dramatically outperforming the others, you have a huge scaling issue, visible or not. If you manage to survive this phase of your company, pain is on the horizon. All a 10xer does is create 10x the work for others, either now or later. That doesn't mean that having senior people on your team is bad. But it does mean that letting them "go back to the dopamine dispenser" for years after they've become load-bearing decision makers results in code that grows less and less understandable to new people, as more and more decisions are made in a vacuum by someone with less and less exposure to the outside world. Inevitably, these codebases are extremely expensive to onboard into and frustrating to work on, making it difficult to hire and retain new employees - which further contributes to the bottleneck around these individuals.

Instead, teams should aspire to a (roughly) equal distribution of work, with the senior people spending the "other 9x" on technical debt, documentation, mentorship, developer productivity, and proactive issue/complexity resolution. Arguably the most beneficial side effect of this arrangement is the example set by your 10x citizens: new hires are empowered to begin their journey emulating these activities. You'll occasionally need to encourage newer people to ship imperfect things, but you won't have your 10xers shipping imperfection and your new hires cleaning up their mess - an arrangement that feels good today, but hurts your hiring and SLOs in the long run.

Software Isn't Ever Necessarily Good, No Matter How Old It Is.

I was reading a delightful Joel Spolsky article that was posted to my company Slack today - it was originally published in 2001, and I had read it many years ago. As with most of Joel's articles, I assume I nodded along and filed it away to think about. This time though, I found myself wishing it had included a bit more information.

The article is great, but needs some augmentation. The title of the article is "Good Software Takes Ten Years, Get Used To It" - a summary of the article could be:

  • Historically, software takes a long time (years) to get to 1.0
  • Historically, software growth/adoption is exponential.
  • Historically, software is pretty low quality until it has doubled in size many times, which historically does not happen quickly.
  • Despite a lot of press, software doesn't really become good until 10 years from when someone started working on it.

There's some cause/effect muddling here - for instance, it's unclear to me whether a relationship between growth and quality is being implied - but he does have a large section of helpful advice explaining and debunking some counter-productive myths ("Business Mistakes"), which is certainly welcome.

However, the premise of the article appears to be that at a certain point in time (10 years, apparently), a piece of software is complete and can no longer be improved. While I wish that were true, I've never seen it in the wild. So I'd like to propose an alternate theory of software completion:

Software Isn't Ever Necessarily Good No Matter How Old It Is, And In Fact Most Software Shouldn't Even Try To Be Good

Some definitions are in order. When I say Good, I mean (and believe Joel to mean) software that serves a useful, necessary, and economically self-sufficient purpose, and whose users are satisfied enough with its performance and functionality that a large majority of them would prefer it not change dramatically.

I believe, then, that the time it takes to write "Good" software is proportional to how complex the software is in its "Good" state.

Joel worked on Excel, and I think 10 years is a very reasonable timeline for how long I'd expect Excel to take. In fact, I think that if a team of engineers had been handed Excel in 2001, it could have kept its feature set -- maybe adding a few parity features to take advantage of faster internet speeds and processors -- and people would generally know what it's good at, and customer expectations and value could continue to be satisfied.

You may know that, with the benefit of history, that isn't really what happened. I suspect that Excel's feature list looks a lot like the chart at the beginning of Joel's article, exploding exponentially in the 20 years since the article was published. In fact, I've worked directly on an Excel competitor, and even as a small part of a larger whole of features, it was always the bear when it came to complexity, and many PMs had the opportunity to make it worse by imagining what might be possible with it.

Perhaps this is an issue with Excel as a product (it is, after all, effectively a programming language and computer all rolled into one). But I suspect that we can extrapolate some useful lessons here.

A counterpoint may be a simple Linux program, which likely could achieve the reliability of Excel 2001 in a much shorter time period. Given that its resource consumption is near zero, it is also economically self-sufficient. Adoption-wise, it would trend with Linux adoption, so on the same curve. I don't have a study for this, but I think our intuition can tell us that, setting aside how much more money it made than it cost, a program like ls or tar roughly follows the same pattern as Lotus Notes in a fraction of the time - as a result of being a fraction of the complexity.

Business Mistakes

All this to say - to refine Joel's thesis, the temporal boundary of 10 years probably isn't that useful. In fact, a math equation probably isn't useful at all, given that the languages, features, and implementations - among other things, like the backgrounds of the authors - will impact the raw values when answering "how long will it take to be good". However, there are likely some much more addressable targets that will help make good software, and help answer the "how long" question:

Is your software even capable of being good

Much software I've seen built in the last 21 years doesn't even have a chance of clearing the bar I set above. It's just there to make money - an arbitrage tool. Imagine a high-frequency trading robot or the Uber app: these pieces of software are entirely bound by the markets they exist in, and cannot possibly be good for any duration of time, given that the markets they participate in are adversarial and changing. There is no feature in Uber that would make drivers want to be paid the amount that Uber wants to pay them. The competing forces mean the software is in a perpetual state of being bad software. To write software in this context, you'd be wasting your time if you took the time and care to refine every feature to being Good. The harsh reality is that, more than likely, you'll just be making it harder for the business to succeed by writing Good code. I strongly suspect that a management style of "plan to throw out the codebase every 2 years" would be far more effective in these environments. In observing these types of companies, that often is the result regardless of the plan.

When is your software good

How will you know if your software is good? Who will decide? What is the last feature you will build? These are hard questions to answer in an exponential-growth-based economy, but I'd argue that most software these days starts out with ambitions that are not possible on a 10-year time scale. PMs are pretty liberal and young engineers are hungry to ship. As Joel noted, the myth of "Internet Time" has long been propagated - the idea that shipping more often is the same as shipping more/better software has long been debunked. Slack ships hundreds of times a day and still barely works. A potential tonic for this problem is to create a clear understanding of the goals across the company, which is far harder than it sounds. In fact, if you work in software, what would you consider the "end state" of your software, if you could stop time and finish it up? If you can even answer that question, ask your PM or a coworker - did they say the same thing?

In summary - after writing software for the entire 21 years this article has existed on the internet, I'd offer this guide on how to write good software:

Determine if your software can even be Good. If so, agree on a simple feature set and improve your software for 2-5 years. At this point, you may have customers. If you do, seek to implement the minimum amount of improvement possible to make them as happy as possible. In an additional 5 or so years, it is possible that your software will be as good as Lotus Notes, one of the best pieces of software written.

However, I don't think most people should even aspire to write software this way. Instead, I'd offer a few alternative paths:

  1. Write less software, or software with lighter economic requirements. Lotus Notes is extremely complicated, and at its peak was capable of supporting a large number of employees and customers. Find somewhere on the spectrum between Lotus Notes and a simple Linux utility that will satisfy your economic requirements and your timeline requirements. Or perhaps just work on something that doesn't need to make as much money - do you really aspire to work on a team of thousands? You can't build something with high complexity quickly; I suspect this was Joel's original point. Most startups fail because they fundamentally attempt to do more than they possibly can with their runway. Why not do something well instead, and fail because you were wrong that people would like it? At least that way you can write Good software before you fail.

  2. Don't write good software. It's not very profitable compared to imperfect-yet-functional software, and if your company is going to scale to hundreds of engineers anyway, your good software will drown in features. You'd need 100 years to write Uber with Good software, and in 100 years nobody is going to drive their personal car for 90 hours a week for less than minimum wage, so the software won't be able to make enough money for you to keep it running anyway. This doesn't mean you shouldn't write readable software, or that it's acceptable not to care about the quality of the software you write -- but it does mean that you probably don't need to strive for perfection, and you definitely don't need to write a platform. You can have classes that are thousands of lines long. Optimize for velocity, not quality. Maybe, if you get lucky, people will like your software enough that you can think about spending the next 10 years rewriting it, this time writing it well.

Your software is temporal; make it fit the team and customers it has now, not an aspirational target in the future. If there's demand for Good software in your space, you'll have 10 years of lead time to get started.

Chesterton's Junkyard

Throughout my career, I've seen a lot of old code. Almost every company in Silicon Valley is started by writing some mediocre software and attempting to validate a business model without spending too much time or money on something that works well.

This is fine, and the amount of founder code that survives is usually proportional to how "good" the code was originally - if you have the good fortune of working somewhere with no "founder code", either your founder was on the lower side of the ego spectrum, or their code was especially atrocious. Which is to say: if the product survived at all and you're not in the first cohort of employees, you have probably experienced a phenomenon called "Chesterton's Fence".

The origin of the quote is a passage in Gilbert Keith Chesterton's 1929 book "The Thing". In a chapter called "The Drift from Domesticity", he writes:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

This "fence" comes up a lot in software, especially old or poorly factored software - and often it gets translated to "if you can't or don't want to dig in to find the original purpose for something, it probably was put there for a good reason, so you should leave it alone". In large and/or high velocity codebases, this gets reduced to "don't delete things you didn't write".

Piles of fences

Lately, I've been feeling that while I understand the intention behind this quote and find it helpful, it's also over-applied to the point of becoming a fallacy.

When taken without subtlety, you could extend it to mean:

A person, encountering a thing that actually has no use, will never be able to destroy that thing. In following this, we inevitably retain only the objects that truly have no purpose.

I've been calling the areas afflicted by this symptom "Chesterton's Junkyards": piles of old, un-owned code that nobody is empowered to change or refactor, at least some of which arguably has, quite literally, no purpose.

Cleaning up Junkyards

The point of this blog post isn't just to complain, but also to propose a reasonable solution: If you encounter one of these junkyards, consider only the contract, and ignore the junk.

If you cannot reduce the offending code to a contract (e.g. it has too many side-effects, or you lack the knowledge to safely distill and replace the contract), at minimum you should file a bug to the effect of "this area of code is a junkyard, and it needs to be cleaned up before it is changed" – effectively encircling the area with caution tape.

In most cases, however, these types of haphazard and unplanned pieces of code are not load-bearing. When considering the contract of a given piece of code, you will likely find that the contract is simpler than the code, and possibly even unnecessary or redundant.

The key difference here is that while Chesterton suggests "seeing the use" of something, I'd propose that many things, especially in code, truly have no use - leading us to a true paradox. But if we instead "define the contract" of said fence - which is to say, we observe its current functionality instead of attempting to understand its purpose - we can much more easily evaluate whether it aligns with our design goals (which have likely changed since we erected the fence), and instead of having to dig in and understand something that may not make sense in the first place, we can just clean up the junkyard.
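One concrete way to "define the contract" is with characterization tests: capture the behavior that callers actually rely on, then swap in the simplest code that satisfies those tests. Here's a minimal sketch in Python - the function, its flags, and its callers are all hypothetical, invented for illustration, not taken from any real codebase:

```python
# Hypothetical junkyard function: a dead flag, a mysterious cache,
# and a mutable-default hack nobody remembers the purpose of.
def normalize_username(name, legacy_mode=False, _cache={}):
    if name in _cache:
        return _cache[name]
    if legacy_mode:  # no caller passes True anymore
        name = name.replace("@corp.example", "")
    result = name.strip().lower()
    _cache[name] = result
    return result

# Step 1: capture the contract - the observed behavior for inputs
# callers actually use - as a characterization test.
def contract_holds(fn):
    cases = {
        "  Alice ": "alice",
        "BOB": "bob",
        "carol": "carol",
    }
    return all(fn(raw) == expected for raw, expected in cases.items())

# Step 2: a replacement that satisfies the same contract, minus the junk.
def normalize_username_v2(name):
    return name.strip().lower()

assert contract_holds(normalize_username)
assert contract_holds(normalize_username_v2)
```

If the contract holds for both the original and the replacement, the swap is safe for every behavior you bothered to capture - which is also the limit of the technique, and why the caution-tape bug is the fallback when the contract can't be pinned down.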