
Atrophy by Abstraction

In 1958, Isaac Asimov published a short story entitled “The Feeling of Power” about a future society where humans had lost to atrophy the ability to perform even simple arithmetic due to a total dependence on computers.  Now I could log volumes on the abysmal state of math knowledge in the humans wandering around in today’s society, but this piece isn’t about that.  It does, however, run along parallel lines for software engineering.

I’ve been doing a fair amount of recruiting for my Engineering team this year and I’m happy to say I’ve hired great people, but it wasn’t easy.  One of the things I like about the process is that I learn a lot.  One of the things I hate is what I sometimes learn, like how many software engineers who have been working in object-oriented technologies for years can’t give a lucid explanation of encapsulation or inheritance – and abandon all hope of polymorphism; log that knowledge as an exception.  These are the core pillars of object orientation, and if you can’t at least describe them (much less define them), you can’t use them correctly.
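For anyone who wants the thumbnail version, here is a minimal sketch (the class names are purely illustrative) showing all three pillars in a few lines of Java:

    // Encapsulation: state is private and reachable only through methods.
    abstract class Shape {
        private final String label;   // hidden state

        protected Shape(String label) { this.label = label; }

        public String getLabel() { return label; }

        // Each subclass supplies its own area calculation.
        public abstract double area();
    }

    // Inheritance: Circle and Square reuse everything Shape already provides.
    class Circle extends Shape {
        private final double radius;
        Circle(double radius) { super("circle"); this.radius = radius; }
        @Override public double area() { return Math.PI * radius * radius; }
    }

    class Square extends Shape {
        private final double side;
        Square(double side) { super("square"); this.side = side; }
        @Override public double area() { return side * side; }
    }

    public class Pillars {
        public static void main(String[] args) {
            // Polymorphism: one declared type, many runtime behaviors.
            Shape[] shapes = { new Circle(2.0), new Square(3.0) };
            for (Shape s : shapes) {
                System.out.printf("%s area = %.2f%n", s.getLabel(), s.area());
            }
        }
    }

If a candidate can explain why that final loop works without knowing which subclass each element holds, we’re in business.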

For the most part, I’m not talking about young engineers right out of school, although you’d think they would have forgotten less.  I’m talking about the 5-10 year senior engineer who stares as still as a fallen log, hoping that the gods of seniority will suddenly bestow upon them the fundamentals.  And on the subject of “senior” in the title, calling a squirrel a duck doesn’t make it quack.  Admittedly Shakespeare put it more eloquently: “A rose by any other name would smell as sweet.”

Another favorite question of mine, as my colleagues well know, is the binary search.  I often ask engineering candidates, especially server-side and database types, to describe it relative to any other search technique.  Half the time the answer starts by naming certain Java classes – nope, pull up, this is not a Java question.  Overall, about 1 in 5 does pretty well.  For the rest, I usually resort to playing a game.

I’m thinking of a number from 1 to 100.  You need to guess it in the fewest number of tries, and after each guess I will tell you that you’re either correct, too low, or too high.

Almost everyone figures out that the logical first guess is 50.  It has no more of a chance of being right than any other guess, but at least you’re reliably cutting the space in half.  If I say “too high”, then guess 25.  If I then say “too low”, then guess 37, and so on.  That’s a binary search!  Start with a sorted collection and find what you need by successively dividing the search space by 2.
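In code, the game is just the classic iterative loop over a sorted array – a minimal sketch, with method and variable names of my own choosing rather than any particular library’s:

    public class Guess {
        // Returns the index of target in a sorted array, or -1 if absent.
        static int binarySearch(int[] sorted, int target) {
            int low = 0, high = sorted.length - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2;          // midpoint without (low + high) overflow
                if (sorted[mid] == target) return mid;     // "correct"
                if (sorted[mid] < target)  low = mid + 1;  // "too low"
                else                       high = mid - 1; // "too high"
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] oneToHundred = new int[100];
            for (int i = 0; i < 100; i++) oneToHundred[i] = i + 1;
            System.out.println(binarySearch(oneToHundred, 37));   // prints 36 (zero-based index)
        }
    }

Note the precondition hiding in the parameter name: the collection must already be sorted, which is exactly why the index discussion below matters.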

Only once was someone’s first answer not 50 – they guessed 70, and my head exploded, scattering debris for miles and miles.

I ask this question because knowing how things work matters.  If you don’t understand the binary search, for example, then you have no idea how an index helps a SQL select or why database inserts and updates are costly when indices are overused.  You may never have to code a binary search ever again thanks to abstraction and reuse, but just because something has been abstracted away from daily view doesn’t mean it isn’t executing at runtime.  Understanding this principle is crucial to being able to play in the high-scale league.

Folding a little math back into my binary search question, I usually ask the following just for fun since only about 1 in 50 come close.  Given a collection of size N, what is the worst case number of tries before you’re sure to win?  More blank fallen log stares as they try to play out the guessing game in their heads, so I lead them down the path.  If N = 2^x (i.e., multiplying by 2, x times), then what is the inverse function x = f(N) (i.e., how many times can N be divided by 2)?  What is the inverse of an exponent?  But this only helps 1 or 2 more out of 50.

If the many occurrences of the word “log” so far in this post weren’t enough of a clue…

If N = 2^x, then x = log2 N
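To put an actual number on the worst case: you need floor(log2 N) + 1 guesses to be certain, which for N = 100 works out to 7.  A tiny sketch (illustrative, not production code) that simply counts the halvings:

    public class WorstCase {
        // Worst-case guesses for the 1..n game: floor(log2 n) + 1,
        // i.e., how many times n can be halved before it reaches zero.
        static int worstCaseGuesses(int n) {
            int guesses = 0;
            while (n > 0) {
                n /= 2;
                guesses++;
            }
            return guesses;
        }

        public static void main(String[] args) {
            System.out.println(worstCaseGuesses(100));        // 7
            System.out.println(worstCaseGuesses(1000));       // 10
            System.out.println(worstCaseGuesses(1_000_000));  // 20
        }
    }

Twenty guesses to pin down one number in a million is the whole reason the technique, and the indices built on it, scales.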

Stealing a bit from Bill Maher and his HBO series, I think we need some New Rules:

  1. Not all programmers are engineers.  Programmers write computer programs, maybe even really good ones.  But to be an engineer, one has to know how things work, or at least possess the intellectual curiosity to want to know.
  2. Calendars and titles do not make engineers senior.  Few things raise my resume red flags higher than seeing every position since kindergarten as Senior this, Lead that, or Super-Duper something else.  Take the time to learn your craft.  That will distinguish you.
  3. Abstraction without fundamentals is unstable.  It can cause us to mistake tight source code for code that performs well, not thinking about the mass of code sitting in base classes, libraries, and frameworks.  We can write resource-sloppy code and assume the garbage collector will magically clean up behind us.  Try that at scale – the sketch below makes the point.
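A purely illustrative Java sketch of the trap (class and method names are mine): both methods below read as one tight line per iteration, but the first quietly re-copies the entire string on every pass – quadratic work hidden inside a library call, plus a pile of garbage for the collector – while the second does the same job in linear time.

    public class HiddenCost {
        // Looks tight; actually O(n^2).  Each += allocates a new String
        // and copies every character accumulated so far.
        static String sloppyJoin(String[] words) {
            String result = "";
            for (String w : words) {
                result += w + " ";
            }
            return result;
        }

        // Same output, O(n): StringBuilder grows its buffer in place.
        static String deliberateJoin(String[] words) {
            StringBuilder sb = new StringBuilder();
            for (String w : words) {
                sb.append(w).append(' ');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            String[] words = new String[50_000];
            java.util.Arrays.fill(words, "log");

            long t0 = System.nanoTime();
            sloppyJoin(words);
            long t1 = System.nanoTime();
            deliberateJoin(words);
            long t2 = System.nanoTime();

            System.out.printf("sloppy:     %d ms%n", (t1 - t0) / 1_000_000);
            System.out.printf("deliberate: %d ms%n", (t2 - t1) / 1_000_000);
        }
    }

At 50 words nobody notices the difference; at 50,000 the sloppy version is the one paging the on-call engineer.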

Summing up, abstraction is good.  It has marked the forward movement of software engineering for many decades.  It drives productivity and at least potentially drives better designs, testability, extensibility, maintainability, and lots of other good characteristics.  But it can also give us permission to be lazy.  With or without title qualifiers, good engineers do not let this happen.  They are self-motivated to learn, and they learn deep, not just broad.

Well, I’ve given away a few of my favorite interview questions here, but if my upcoming candidates suddenly know these answers, I can at least give them credit for reading the interviewer’s blog as preparation.


The Over-Under on Process

As long as there has been a Software Development Life Cycle (SDLC), there have been efforts to devise processes to manage it.  From the excruciating waterfalls of the 1980s (e.g., Mil-Spec 2167), through the OO methodology wars of the 1990s (e.g., OMT, Booch), to the broader processes of the 2000s (RUP, Agile), these processes have evolved along with technologies and business demands.

Various process aspects may be more or less applicable to a particular reality, and they are always adapted in some way from the published baseline.  In my last company, we embraced Scrum as being closest to our sensibilities out of the box.  We then augmented the notion of the Product Owner with multiple feature owners, recognizing that no one person can expertly represent the constituencies of market trends, immediate customer requests, and the underlying technical issues.

We also had two teams, Application and Platform, with interdependencies that couldn’t always be resolved within a single 3-week sprint.  So we concocted a process by which each team executed sprints separately – still 3 weeks, but offset by 1 week to give the Platform team a head start.  Pros and cons with this, but that’s another post.

The point is that SDLC management processes along with their human and non-human components form complex systems.  Their selection and adaptation must be performed thoughtfully and nothing substitutes for experience here since to a large degree, human behavior will be the make or break factor.

Fundamental Objectives

Any SDLC process worth implementing must achieve certain fundamental objectives irrespective of the underlying technology, the experience of the team, someone’s favorite textbook, the phase of the moon, or the flavor of the month.  In my view, these are they.

  1. Measurability: To borrow a famous maxim, you can’t manage what you don’t measure. Metrics may vary from one process to another, but fundamentally a well-defined process enables consistent and comparable measurement of activities so that they can be reviewed dispassionately, tuned, and reported.
  2. Repeatability & Predictability: As in most endeavors, practice makes perfect. The more releases, iterations, or other cycles a team executes, the more efficient that team can become, the closer estimates will align with reality, and the more the process itself can be tuned. With each cycle comes a new set of technical challenges. Procedural challenges should trend toward zero.
  3. Visibility & Transparency: One of the fundamentals of forging a team from a group of individuals is providing them with a fully connected view of the broader scope. Up a level, the Engineering department is a member of a team of departments many of which include direct stakeholders. A good process enables a comprehensible view to its inner workings and the impact of external forces, without which accountability will be a scarce resource.
  4. Decision Context: An urgent customer requirement comes in from left field. Can it be accommodated and what may be impacted (e.g., the release date, other tasks, which ones, etc.)? A good process provides a well-understood context for making hard choices without resorting to throwing food. Not everyone may leave happy, but everyone understands how the decision was made, why it was made, and the benefits and costs it carries.
  5. Comprehensibility: The team can’t execute what it can’t understand and none of these objectives will be realized if team members are following significantly different interpretations. The simpler the process, the more likely its compliance will be true to its intent. Furthermore, staff changes are inevitable. Shorter learning curves yield faster capacity availability.

Notice that I omitted rate of delivery and quality.  Clearly these are factors we all endeavor to maximize.  I would argue, however, that to achieve and sustain these without the foregoing is like trying to speed up a poker game by not looking at your cards.

Potential Pathologies

Processes can turn pathological: conditions where even good qualities are accidentally subverted by being out of balance with other important factors.  Even the most well-meaning process practitioners can find themselves spiraling down the rabbit hole.  Here are a few of my favorites.

  1. Responsibility Transference: Now that we have a process, why burden ourselves with common sense? Processes are like any other system with many moving parts; they need to be initially debugged and then tuned over time. They should never be assumed to be so perfect that the brains of the participants can be disabled. This is like blindly coding to a specification even when errors are suspected, assuming that the spec writers must have known what they were doing.
  2. Rigor Mortis: Can’t – move – process – not – letting me. When the house is on fire, don’t wait for a ruling on procedure; just grab a hose. There’s a fine line between adhering to the process and elevating it above the product. The process is a tool to meet objectives; it is not the objective in and of itself. As with the previous pathology, there are times when common sense really does need to prevail, with a logjam review to follow.
  3. Exception Domination: An estimated 60-70% of most source code goes to handling exceptions, leaving the minority for primary functionality. An SDLC process rarely anticipates every odd circumstance. If it does, it probably has so many paths as to be incomprehensible to those trying to execute it. Unlike CPU-executed software, missing process paths are a good tradeoff for simplicity. Human collaboration can fill in the gaps.
  4. Illusion of Competence: Certifications such as ISO-9000 and SEI-CMM can be useful when properly applied. Their principles embody years of best practices and refinements. However, these are process certifications, not product certifications. A software development shop can be CMM Level-5 and still produce junk. It is not uncommon, for example, to find offshore shops touting these credentials having only been in business for a year – run for the hills. These are cases where more energy is spent looking like a world-class operation rather than being one.
  5. Numerous Definitions of Done: Is it done? Yes; well, except for testing. Is it done? Yes; um, it just needs to be reviewed and there’s that other thing. Is it done? Yes. Great, so the press release can go out? Well, no, it’s being held back a release so we can stress test it some more. The use of the word “done” should be outlawed until its unambiguous definition is signed in blood by every member of the team. I have a theory that more project management frustration stems from the misuse of this word than from any other single cause. Done is definitely a 4-letter word.

Summary

Process, good.  Process plus people using their brains and talking to each other, better.  Done.

Holes and Drill Bits

So much of communication is about context.  So much of listening is about knowing where the speaker is coming from.

One of the single most important aspects of engineering practice is the acquisition and translation of requirements, irrespective of the technology or the development process.  This is because errors at this stage affect everything and the parties involved often share a rather thin contextual overlap (e.g., a technologist vs. a business sponsor).  As a technologist, it is especially important to listen actively and get inside the heads of the requirement sources.

My Kingdom for a Drill Bit

Here is an abridged version of a parable I sometimes use during interviews for positions like Architects and Principal Engineers; people who will be faced routinely with the translation of requirements.  The roles:

  • Customer:  Bob, Manufacturing Manager
  • Problem Solving Super Hero:  You

While setting up his manufacturing line, Bob runs into an issue.  Among the various parts that need to be prepped for assembly is a portion of a steel chassis.  It is essentially a steel plate, and one of the specifications calls for a 1mm hole in a certain location.  The problem is that, steel being steel and given its thickness relative to the diameter of the hole, Bob’s drill bits keep snapping.

Now Bob is a busy guy and doesn’t have time to hunt for harder drill bits; this is but one of many issues on his plate (no pun intended).  He’s sure they exist, but he simply has not run into this problem before and thus has not shopped around.

So Bob comes to you.  He tells you his tale and sends you off to research the latest technology in drill bits while he tends to other tasks.  Because you’re a mechanical engineering aficionado and innate problem solver, you get excited about this challenge.  It’s like a holy grail quest to find the latest thing in drill bits – high-tempered, diamond-coated, beer-flavored marvels of rotational genius.

After a vain search to find anything substantially better than what Bob already had, you stop and think.  What is Bob’s requirement – his true need?  Is it really better bits?  No, the man needs a hole.

Thanks for the Hole

There are a number of ways to solve Bob’s problem without drill bits, but that’s not really the point.  Bob clearly has a need: a reliable and repeatable method of putting a 1mm hole into his steel chassis.  However, what Bob communicated was not his requirement, but rather his perceived solution.  Based on what Bob actually said, how could you know the difference?  Understanding where he’s coming from, it’s pretty simple.

As technologists, it’s easy for us to latch onto the technical problem as stated.  Through 4 – 8 years of college, we are handed endless problems to solve, as is.  During our formative professional years, we typically interface with more senior technologists who presumably have already translated the issues from Bob-speak.  But there comes a point in our careers when we need to move beyond the system’s what, when, and how, and understand the why.  We need to get to know Bob and solve problems that are human as well as technical.