
How much should I factor in potential future developer error when writing code? This is easier to explain with a few examples:

switch (something)
{
    case 1: return good;
    case 2: return fine;
    case 3: return okay;
    default: throw new Exception("Should never happen, you probably forgot to add your new case");
}

Or we have an attribute which can only be placed above Entity Framework ForeignKey properties. So if someone puts it above a non-primitive property, we throw an exception saying "hey, you didn't want to do that, you put it over the wrong property".
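
Roughly like this (a simplified sketch; the attribute name and the exact check are illustrative, not our real code):

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public sealed class ForeignKeyLookupAttribute : Attribute { }

public static class AttributePlacementValidator
{
    // Run once at startup; names and the exact condition are illustrative only.
    public static void Validate(Type entityType)
    {
        foreach (PropertyInfo prop in entityType.GetProperties())
        {
            bool marked = prop.IsDefined(typeof(ForeignKeyLookupAttribute), inherit: false);
            if (marked && !prop.PropertyType.IsPrimitive)
            {
                throw new InvalidOperationException(
                    $"{entityType.Name}.{prop.Name}: you didn't want to do that, this attribute belongs on a foreign key property.");
            }
        }
    }
}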

I know some of these switch statements are bad OOP design, but they're there.

I just tried for way too long to write an abstract base class in a way that somehow forces anyone who derives from it to implement 2 methods, while having those two methods execute base class code at the end. I don't think it's possible.

Anyone who derives from a class should know why and be perfectly able to figure this out by himself. Right?

What I'm writing here is not really code for the application.

Client code should be easy. But I find it very difficult to find the line between what's "client code" and what's... "other code". I don't even have a word for it.

My abstract base class was about caching in a repository. The client code did not change. It's still simply myRepository.GetAll(); but the repository itself changed and is less simple.
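
For illustration, the general shape is something like this (a simplified decorator-style sketch with made-up names, not my actual abstract base class):

using System.Collections.Generic;

public interface IRepository<T>
{
    IReadOnlyList<T> GetAll();
}

// Callers keep writing myRepository.GetAll(); only the wiring decides whether a cache sits in between.
public sealed class CachedRepository<T> : IRepository<T>
{
    private readonly IRepository<T> _inner;
    private IReadOnlyList<T> _cache;

    public CachedRepository(IRepository<T> inner) => _inner = inner;

    public IReadOnlyList<T> GetAll() => _cache ??= _inner.GetAll();
}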

In a way I'm just trying to anticipate mistakes and be efficient, but in another way I feel like I'm unfairly underestimating other developers. And looking at it pragmatically, as soon as I introduce one cached repository into our code base, even the most junior developer can figure out how to write another one.

I don't really need to provide a perfectly abstracted way to do caching in any repository that requires no knowledge, no code adjustments and works automagically every time. I doubt I can.

Does this concept have a name? How can I better figure out where to stop?

    4 Answers


    switch statements over an int value don't allow the compiler to help you. But if you instead switch over an enum provided by a library, the compiler can check that all cases are covered, and a "default" case becomes pointless.

    Now you upgrade the library to a new version with a new enum case. Good news: The compiler tells you exactly what you need to change.
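
    For illustration, a C# switch expression over such an enum (names are made up; the point is the deliberately missing default arm):

    public enum Result { Good, Fine, Okay }

    public static class Handler
    {
        public static string Describe(Result r) => r switch
        {
            Result.Good => "good",
            Result.Fine => "fine",
            Result.Okay => "okay",
            // No default arm on purpose: if the library adds a new enum member,
            // the compiler reports the unhandled case (warning CS8509) right here.
        };
    }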


      To summarize a fairly lengthy answer to all the points you've raised:

      • While some of your statements are correct, you are using them to draw faulty conclusions.
      • Overall, your question reads as if you are building up a justification for putting in no more than the bare minimum of development effort.
      • Your reasoning seems justified to you, but from historical experience those exact lines of thinking are commonly the root cause of endemic development issues in teams with never-ending bugs, growing technical debt, and a lack of code quality and oversight.

      I've done my best to identify your logical reasoning and point out the fallacies or dangerously misinterpretable statements contained within.


      Does this concept have a name?

      I'd argue that this is a variation on defensive programming, but instead of defending against user actions and unforeseeable events, you're defending against reasonably foreseeable developer error. It's the same approach overall.


      throw new Exception("Should never happen, you probably forgot to add your new case");

      Specifically for .NET, this kind of exception is best described by a NotImplementedException.

      NotImplementedExceptions specifically telegraph to the developer that an implementation has not yet been provided, as is the case for your example.
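
      Applied to your example, that would look something like this (a sketch only: "something" and the string values stand in for your placeholders):

      static string Describe(int something)
      {
          switch (something)
          {
              case 1: return "good";
              case 2: return "fine";
              case 3: return "okay";
              default:
                  // Tells the next developer explicitly that handling for this value is missing.
                  throw new NotImplementedException($"No handling implemented for value {something}.");
          }
      }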


      switch(something) case 1: return good; case 2: return fine; case 3: return okay; 

      If you want to avoid future developer error, then you have to focus on the readability and self-documenting nature of the code you write. Focusing on that, whenever you're dealing with an int-switch, you're generally better off using a more descriptive enum instead.

      While it doesn't quite change how the code works on a technical level (enums are ints under the hood), it significantly improves your code's readability and the ability to remember which specific int values are in use and which are not.
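
      For example (hypothetical names), the bare ints from your switch could become:

      // The underlying values stay 1, 2 and 3, but call sites now read as words instead of magic numbers.
      public enum ReviewOutcome
      {
          Good = 1,
          Fine = 2,
          Okay = 3
      }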


      I just tried for way too long to write an abstract base class in a way that somehow forces anyone who derives from it to implement 2 methods, while having those two methods execute base class code at the end. I don't think it's possible.

      It is possible:

      public abstract class Base
      {
          public void MyPublicBaseLogic()
          {
              MyProtectedDerivedLogic();
              MyPrivateBaseLogic();
          }

          protected abstract void MyProtectedDerivedLogic();

          private void MyPrivateBaseLogic()
          {
              Console.WriteLine("This is FORCED base logic");
          }
      }

      public class Derived : Base
      {
          protected override void MyProtectedDerivedLogic()
          {
              Console.WriteLine("This is derived logic");
          }
      }

      public class OtherDerived : Base
      {
          protected override void MyProtectedDerivedLogic()
          {
              Console.WriteLine("This is other derived logic");
          }
      }

      If you make a new instance of Derived or OtherDerived, you will see that it only has one public method: MyPublicBaseLogic(). When you call this method, it first executes the derived class' custom logic (defined in the specific derived class), and then finishes with the forced base logic (the "FORCED base logic" WriteLine in this example), which is identical for every derived class because it's defined in Base.
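
      Calling it looks like this:

      Base first = new Derived();
      first.MyPublicBaseLogic();
      // Output:
      // This is derived logic
      // This is FORCED base logic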


      Anyone who derives from a class should know why and be perfectly able to figure this out by himself. Right?

      This statement starts correctly and ends wrongly.

      There's a difference between "I shouldn't expect this basket to break" and "I should put all of my eggs in this basket". The former is correct, but the latter is the faulty conclusion that a lot of people end up drawing from it.

      "Anyone who derives from a class should know why"

      Yes, it's reasonable to assume that developers know what they're doing when they're doing it.

      "Anyone should be perfectly able to figure this out by himself"

      This is a bridge too far. No, you cannot guarantee that developers haven't made an oversight when doing something.

      Just because you expect developers to be informed does not mean that you should avoid informing them anyway. To prove my point, what you're saying is the same as saying:

      I assume that anyone who drives on this road should be familiar with the traffic situation on this road. Therefore, we don't need to put up any traffic signs. People will already know, right?

      I hope you see the flawed reasoning here. You're assuming everyone knows what you know, which is a lack of foresight.

      Do not use this reasoning to justify not writing documentation or logging. I'm putting that in bold because finding reasons to not have to write documentation or implement clear logging is often the root cause of a team's future technical debt and downward spiral, and the argument you're making here is precisely the response I get whenever I'm brought in to tackle a development team's problems and question why there's no documentation or logging.

      Your question is specifically on how to pre-emptively prevent future errors. But rather than assume you can perfectly prevent them, also assume that errors can still be made (no matter how well you defend against them) and therefore you should clearly document and describe these errors as well.


      I don't really need to provide a perfectly abstracted way to do caching in any repository that requires no knowledge, no code adjustments and works automagically every time. I doubt I can.

      I mean, if you conclusively know that you're not able to do it, then don't do it. That's just common sense.

      However, you not knowing or being able to do something is not a justification for it not needing to be done. "I can't do it, and therefore it's not necessary" is a nonsensical conclusion.

      And looking at it pragmatically, as soon as I introduce one cached repository into our code base, even the most junior developer can figure out how to write another one.

      Factoring in your previous statement, what you're saying here is effectively "I don't need to abstract, others can just copy/paste/adjust my concrete implementation".

      That is the absolute antithesis of good development practices. Promoting reusability and not having to reinvent the wheel are quintessential developer skills. I would seriously question the value of any developer who actively argues against doing so.

      I have done exactly that in the past. When brought in to fix a development team's persistent technical debt issues, I usually identified one or more developers who actively preached bad practice (either knowingly or unknowingly), and the way to fix the team's issues was to re-educate those developers, or remove them from their position as lead developer or trusted source of quality assurance.


      How can I better figure out where to stop?

      What is correct, however, is that you don't always need to abstract. You're right that there is a line to draw here, where abstraction is no longer necessary and would no longer add value to the codebase.

      My general rule of thumb here is to count like a caveman: one, two, many. When you hit "many", i.e. 3 or more, it has become time to abstract.

      I'm anticipating comments that are going to say "but even 2 occurrences should be abstracted". And those comments are correct. However, they forget to account for human error and overzealous pattern-matching.

      Looking for abstractions is essentially a pattern-matching skill. It's all about spotting a reusable pattern. But just because you spot a pattern doesn't mean that it's a correct pattern to spot. You can overapply this and start seeing patterns that aren't there.

      A simple example here is having a Person and Company class which both have a string Name property. That's a pattern, but does it really need to be abstracted?
      No. Just because these two entities both have a name doesn't mean that their name-related logic is going to be shared or reusable.
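
      In code, the "pattern" is nothing more than this (illustrative):

      public class Person  { public string Name { get; set; } }
      public class Company { public string Name { get; set; } }
      // Extracting a shared INamed base type here would couple two unrelated concepts for no real gain.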

      This is why I suggest only looking for abstractions starting from the third occurrence. It's easy to misinterpret two similar things as a pattern, but when a third instance occurs, it's much more likely that there is an actual reusable abstraction happening here. Still not a guarantee, but the odds are much better.

      That being said, common sense still applies. Even though you're only implementing the first (and only) cached repository, if it's very reasonable to either expect (based on common sense) or know (based on future development phases) that multiple repositories are going to be cached in the future, then it makes sense to create this abstraction from the get-go, if you can.

      My rule of thumb only exists to suggest that you evaluate abstractions when three similar occurrences exist. But that doesn't mean that you're not allowed to already evaluate this earlier.


        Changes happen, and making sure that changes will not lead to subtle bugs is generally a good idea. There are various techniques and ideas that can help ensure correct changes:

        • using static typing, which amounts to an automated proof of correctness for limited correctness properties
        • using defensive programming, e.g. adding asserts to make your assumptions explicit
        • good test coverage

        To some degree these are interchangeable. E.g. to ensure that an ID is in a given range, one design would be to create a type representing this property:

        class ID {
            public final int value;

            public ID(int value) {
                // we only create an ID object if it is valid
                if (0 < value && value <= 1000) {
                    this.value = value;
                } else {
                    throw new IllegalArgumentException("ID must be in range (0,1000]");
                }
            }
        }

        public Object loadObjectForId(ID id) {
            ... // ID is guaranteed to be valid here
        }

        Or we might use an assert:

        public Object loadObjectForId(int id) {
            assert 0 < id && id <= 1000 : "ID must be in range (0,1000]";
            ... // ID is guaranteed to be valid here
        }

        Or we might write some test cases:

        @Test
        public void failsIfIdIsTooSmall() {
            assertThrows(() -> loadObjectForId(0));
        }

        @Test
        public void failsIfIdIsTooLarge() {
            assertThrows(() -> loadObjectForId(1001));
        }

        @Test
        public void loadsObjectsForValidIds() {
            assertNotNull(loadObjectForId(1));
            assertNotNull(loadObjectForId(38));
            assertNotNull(loadObjectForId(1000));
        }

        Or we might combine approaches, e.g. testing + asserts. Which approach is best depends heavily on context. E.g. the variant with the assert looks simplest, but might require the assert to be copy&pasted to many places.

        This is not about underestimating developers, this is about planning for change. A lot of software development is about finding ways to prevent project failure despite change. This ranges from agile ideas like iterative project management or refactoring tooling to ensuring a sufficiently flexible design through design patterns. While static typing, asserts, and tests remove flexibility from the system, they should be used to remove flexibility that would move the system towards undesirable states. Similarly, we want a bridge across a river to be flexible enough to survive storms and earthquakes, but not flexible enough to collapse. Just like the bridge might have support cables or pylons to keep it upright, we can use ideas like defensive programming to keep the software going despite unexpected events.

        Btw, if you want to enforce that a base class performs an action after an overridden method, the solution can be to use a variant of the template method pattern:

        • add a public final method in the base class
        • let it call a protected method that can be overridden

        For example:

        abstract class Base {
            public final Result getStuff() {
                Result r = getStuffHook();
                updateCache(r);
                return r;
            }

            protected abstract Result getStuffHook();
        }

        class Specialized extends Base {
            protected Result getStuffHook() { ... }
        }

          Writing code that cannot be misused is the main goal of any design. Code that cannot be misused is easier to maintain, causes less cognitive load, and is more readable overall as well.

          Since code will potentially be read many times but written only once, basically any effort you spend making it easier to reason about is a net win. Ideally, any sequence of method calls should result in meaningful behavior. Now, that is not 100% possible in languages with weak type systems like Java and the like. But FP people, for example, often say "when it compiles, it works". That's our goal.
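
          As a small illustration of that goal (hypothetical names, not from any specific library): instead of handing out an object that can be used before it's ready, only hand out objects that are already in a valid state.

          public sealed class Connection
          {
              private Connection() { }

              // The only way to obtain a Connection is fully connected,
              // so "use before connect" simply cannot be written by the caller.
              public static Connection Connect(string host)
              {
                  // ... establish the connection here ...
                  return new Connection();
              }

              public void Send(string message)
              {
                  // safe by construction: every Connection instance is connected
              }
          }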

          Annotations (attributes in C#) have the weakest type support in both Java and C#. That's one of the reasons I am not a fan and try to avoid them as much as possible. I also avoid frameworks that necessitate them.

          Inheritance is tricky, in that it has a lot of rules that are very easy to miss and doesn't give you much flexibility to constrain implementations. So again, avoid it.

          I would be grateful for anything that I don't have to think about while using your classes. Not because I couldn't figure it out (I hope I could), but because I'm probably also using several other classes/libraries. At some point there are just too many things, and that's when errors happen.
