I recall being taught at the very beginning of Computer Programming 101 about the distinction among compile-time errors, run-time errors, and logic errors. The definitions are easy to memorize by rote and easy to accept at face value; however, throughout my career I have observed code written in a way that suggests the deeper implications of these error-type distinctions were lost on the author. The importance of these distinctions did not sink in for me immediately upon learning them, either.
This might be caused in part by the fact that none of the educational materials I can drum up mention those deeper implications. Why do we care to define these three categories in the first place? There must be some reason beyond simply dividing errors into three arbitrary buckets.
The reason we care is that the compile-time/run-time/logic-error taxonomy is not flat in terms of the grief each type can give you. Rather, the error types typically represent a worsening progression, because each type gives the developer much later feedback that an error has occurred than the type before it.
Compile-time (syntax) errors are the easiest to deal with. While any such errors are present, the software cannot be built, run, or (for the most part) deployed. Many modern IDEs will even flag such errors as soon as you type them, without your ever needing to invoke the compilation step manually.
Run-time errors are the second most problematic, but fortunately they tend to make themselves quite visible when they occur. The problem with run-time errors is that flushing them out might require executing every possible combination of paths through your code, an impossible task under any existing testing strategy for all but the most trivial applications. The best you can hope for is to exercise your code enough that you gain some sense of confidence in it.
But it should be obvious that you should seek out every opportunity to move a potential error from run time to compile time, whenever possible. Accomplishing this requires a solid understanding of the language in which you are developing, so that you understand the kinds of checks your language’s compiler can perform to your benefit. (I should explain at this point that the bulk of my software development experience is with C#, a statically typed, compile-time type-checked language. I don’t have enough experience with any dynamic language to leverage its dynamic nature for design benefit.)
So anyway, time and again I have seen developer after developer pay the Static Typing Ceremony Tax, declaring interfaces and classes and inheritance hierarchies, yet fail to fully reap the benefit that paying it should confer.
The final error category is logic errors. These are errors that occur in such a way that the execution of your application continues without interruption. Logic errors are often the most insidious, because your application keeps functioning, but in an unknown state, and it is usually left to the user to detect that any error has occurred. In heavily trafficked systems, such as large websites or web services handling requests from many clients, large amounts of incorrect or corrupted data can be created in a very short time, and often no convenient provisions exist for quick cleanup of that data, so the after-effects can be very costly and time-consuming.
I typically see developers pushing what could be compile-time errors out to run time, due to what I must assume is a lack of understanding of how to leverage the compilation process to detect errors much earlier and much more reliably. Here is a typical example:
public void AssignAName(IAnimal animal, string name)
{
    if (!(animal is Duck))
        throw new ArgumentException("Only ducks can be named here.", nameof(animal));

    // assign a name to the duck
}
Whenever I see code doing this, I can hear Jerry Seinfeld in my head, saying, “You know how to take the reservation, you just don’t know how to hold the reservation,” except changed to, “You know you’re supposed to code against interfaces, you just have no idea why.”
At least the person who created this snippet was thoughtful enough to throw an exception if the type check fails; not everyone is so conscientious. Fearing that some audiences might shut down if I were to start preaching high-brow concepts such as the Liskov Substitution Principle (or the Open/Closed principle for that matter), in some cases I like to just refer to this as violating the “Sandwich Principle.” Imagine if I were to tell you that I’m so hungry, I could eat any type of sandwich in the world! So you kindly offer me a hoagie, which I reject, stating, “I only accept reubens!” This is exactly what this code example is doing. Failing this analogy, I will sometimes try simply explaining, “You’ve made the public-facing interface of your class a liar.” I’ve also jokingly referred to this type of code as fulfilling the Principle of Most Surprise. But enough poking fun …
Ideally, the author of this snippet would simply have made the method accept a Duck only. Then, if any caller attempted to pass any type of argument other than a Duck, the compiler would balk with a descriptive error message. Instead we are left hoping that some form of testing, manual or automated, will happen to catch an inappropriate argument being passed to this method.
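A sketch of that tightened version (Duck and IAnimal are the stand-in types carried over from the snippet above; the Name property is assumed for illustration):

```csharp
public interface IAnimal { }

public class Duck : IAnimal
{
    public string Name { get; set; }
}

public class DuckNamer
{
    // By accepting Duck rather than IAnimal, the type check moves from
    // run time to compile time: passing any other IAnimal simply
    // refuses to compile, and the ArgumentException disappears.
    public void AssignAName(Duck duck, string name)
    {
        duck.Name = name;
    }
}
```

A call such as `new DuckNamer().AssignAName(someGoose, "Gary")` now fails at the earliest and cheapest possible point: the build.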
Suppose this method were to check the type of the IAnimal argument to see whether it is a Duck, a Goose, or a Falcon. If you own the object model yourself, then it might be time to introduce a new abstraction, an IBird perhaps. Why fight against an object model that you yourself (or your team) created? Yet this seems surprisingly common. Read up on semantic coupling to learn more; Code Complete, 2nd Edition, has some good material on it.
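A sketch of what that abstraction might look like (all of the type names here are hypothetical, extending the Duck example above):

```csharp
public interface IAnimal { }

// A new abstraction capturing the capability the method actually needs.
public interface IBird : IAnimal
{
    string Name { get; set; }
}

public class Duck : IBird { public string Name { get; set; } }
public class Goose : IBird { public string Name { get; set; } }
public class Falcon : IBird { public string Name { get; set; } }

public class BirdNamer
{
    // One signature covers every bird, present and future; passing a
    // non-bird IAnimal is now a compile-time error, not a type check.
    public void AssignAName(IBird bird, string name)
    {
        bird.Name = name;
    }
}
```

New bird types can now be added without touching this method at all, which is the Open/Closed Principle quietly doing its job.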
Here is a typical example of code that could cause a run-time error, but which has been masked to instead push the issue into logic-error territory:
public void DoStuff(object obj)
{
    if (obj != null)
    {
        // do stuff
    }
}
This is another unfortunately common practice. In this case, for the sake of argument, assume that there is no reasonable case in which the method’s argument should ever be null, but some crafty developer has checked the argument anyway and ensured that the method silently returns (and therefore “succeeds”) regardless. Perhaps at one point the developer encountered a situation in which the argument actually was null and thought, “A-ha! I know exactly how to solve this.” But the developer never stopped to consider that since the argument never should have been null in the first place, the real problem was elsewhere; preventing this method from blowing up merely hides it.
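A fail-fast sketch of the same method (the parameter type is a stand-in, as in the snippet above): instead of silently swallowing the impossible case, announce it.

```csharp
using System;

public class Worker
{
    public void DoStuff(object obj)
    {
        // If obj should never be null, a null here means the *caller*
        // has a bug. Surface it loudly and immediately rather than
        // letting the application limp along in an unknown state.
        if (obj == null)
            throw new ArgumentNullException(nameof(obj));

        // do stuff
    }
}
```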
Ask yourself this question: Is it better to disable the “Checkout” button on the shopping cart of your website, or is it better to litter your code with 1,000 null checks everywhere the ShoppingCart object might be used before it has been initialized?
This is but one practice which I put into a category named, “Fixing the line of code on which a problem manifests itself, rather than fixing the broader context of the application in which the actual error originated.” (The previous code snippet is an example of this same behavior.) After all, the goal is not to prevent exceptions in our software; the goal is to create software that functions correctly. Sometimes the best way to accomplish this is to proactively notify ourselves, as developers, during the development phase, that some problem has occurred or some unexpected state has been entered, so that we can correct it.
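In C#, one lightweight way to get that development-phase notification is a debug assertion, which fires in debug builds and compiles away in release builds (a sketch; the cart parameter and CheckoutService are hypothetical):

```csharp
using System.Diagnostics;

public class CheckoutService
{
    public void Checkout(object cart)
    {
        // In a debug build, a null cart stops the developer in their
        // tracks at the moment the invariant is violated; in a release
        // build the assertion compiles away entirely.
        Debug.Assert(cart != null, "ShoppingCart must be initialized before checkout.");

        // proceed with checkout
    }
}
```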
I have a friend and coworker who makes a hilarious comparison of development practices that push errors in the wrong direction to a Three Stooges episode he once saw (“Cactus Makes Perfect”), in which Stooge #1 falls into a cactus, so Stooge #2 pulls him loose and begins pulling cactus spines out of his butt cheek with a pair of pliers, while Stooge #3 takes a pair of scissors and starts cutting the cactus spines level with the skin on the other butt cheek. Don’t be the stooge who cuts the cactus spines level with the skin. Completely remove them with pliers instead!
In many cases, choosing to “fail fast” can lead to much more robust software. Here is a quote from an excellent article from ThoughtWorks on the subject of failing fast, which I highly recommend that you read: “Failing fast is a nonintuitive technique: “failing immediately and visibly” sounds like it would make our software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production.”
So in conclusion, always bear in mind that run-time errors tend to be more problematic than compile-time errors, and logic errors tend to be more problematic than run-time errors. The closer to compile time you can cause possible errors in your code to manifest themselves, the better off you (and perhaps your software) will be. And always consider that the cost of fixing a bug increases rapidly with the amount of time it takes to detect it.