There are many rules in the world of programming that are worth following. You probably know a lot of them. The most popular are those with acronyms such as SOLID, DRY, ACID or KISS. However, there are many more. After all, there are also rules such as code consistency or readability. Design patterns can also be thought of as application development principles. The same applies to commonly recognized libraries, frameworks, technologies, interfaces and protocols. There are also rules resulting from static code analysis. Admittedly, it's a lot, and it's not easy to remember it all. I think it can be especially difficult for beginner programmers. When they start working on their first projects, they get a lot of feedback from more experienced colleagues during code reviews. After all, the code they write has to be added to an existing project, and it must meet recognized standards. It is not difficult to get lost in all this, let alone remember to apply it all. Often someone then comes up with the idea of writing down all these rules and good practices. I've seen many such lists, but not one of them included the most important rule of a programmer's work. Definitely the most important and the most obvious one.

Of course, as is usually the case with first principles, it is so important, right and obvious that a lot of people forget about it. Yet departing from this principle sinks any project faster than anyone might expect. I've seen many such projects. The project is still alive, it's supposed to be developed, but with each line of code you can feel the stench of rot more and more. It's hard even to say that such a project walks along. It shuffles. Instead of communicating with the programmer or user, it babbles more and more, and over time it just growls. And it devours. Not just the resources that keep it in the illusion of life, but mostly brains. Programmer Brain Eater: The Zombie Project.

You're probably wondering what rule keeps code alive, and what breaking it turns a project into a zombie. I'll reveal it soon. But first, let's follow the stages of the disease step by step.
Let's start from the beginning. So we have healthy code. It may be a baby (greenfield), it may be a kid, it may be a mature project - it doesn't matter. As long as our most important rule is followed, the code is likely to be reasonably healthy. Of course, everyone gets sick from time to time. Some diseases are genetic - inherited from previous projects. There are also acquired defects, and sometimes a more serious disease strikes - sometimes even a fatal one - but these do not turn our patient into a zombie. So we usually have a relatively healthy project that matures slowly. Sometimes it gets fascinated by some new ideology, sometimes we find it drunk (you know, everyone used to be young :), but in general it's not bad.
Our patient develops according to wise principles and, as we have already said, there are a lot of them and new ones are added all the time. Their use is extremely important, because without them, instead of a mature project, we would probably see a twisted mutant. To keep the code in check, we enforce them all. No code review passes when a deviation is found. Blind adherence to the rules - this is the beginning of the disease! It starts innocently. We have good intentions, don't we? But what if the rules conflict with each other? It happens more often than we might imagine: KISS vs complex design patterns, SRP vs DRY, inheritance vs the Liskov Substitution Principle. Of course, these are never contradictions de jure, but de facto they often clash. What wins then? What always wins in ideological disputes - whichever side looks more attractive! I am particularly sensitive to the principle of code consistency. When I see someone start paying a lot of attention to it, a red flag goes up for me. A-ha - it looks like the project is infected with zombieism!
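To make the SRP-vs-DRY clash concrete, here is a minimal sketch (all names invented for illustration) of two checks that are textually identical today but change for different business reasons:

fun isValidInvoiceNumber(n: String) = n.length == 10
fun isValidCustomerCode(c: String) = c.length == 10

fun main() {
    // DRY tempts us to merge these into one shared check, but they change
    // for different business reasons; merging them couples two
    // responsibilities, and changing one rule would silently alter the other.
    println(isValidInvoiceNumber("FV/2024/01")) // true
    println(isValidCustomerCode("CUST-00001")) // true
}

Leaving such "duplication" in place is cheaper than the wrong abstraction - which rule wins depends on the context, not on which one sounds nicer.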
A project's zombieism is like a pestilence. At first you can't see it, but it develops faster and faster over time. In the early phase the project can still be saved - the diseased tissue can be cured or cut out and, above all, its cause can be removed. However, if it is not treated quickly, there is a high chance of death. Sometimes the disease grows so slowly that it does not manage to kill the patient before he dies of natural causes (e.g. old age). But I don't think it's wise to disregard it for that reason. Already at this stage it makes the life and development of the project more difficult, destroys developers' ambition and joy of programming, and wastes the sponsors' money. The infected project begins to shuffle, groan, growl and stink, and it saps the strength of anyone working with it, scaring the entire neighborhood. Euthanasia is not considered, simply out of affection and a lack of alternatives or the resources to build them. Of course, everything has its limits.
After a few, or at most a dozen or so, years of zombification, someone finally makes the difficult decision to mercifully end the torment of such a project. Sadly, but with relief, it is cut off from the funding drip, the programmers are taken off it, and its archive is laid in a coffin in the repository graveyard. However, it is wrong to think that this is the end of the problems with zombieism. Oh no, no - it wasn't the project's fault - it was just an infected patient. Its death does not mean the end of the pestilence. Zombieism can easily infect its successors. Are you wondering why this happens?
Project zombieism is a potentially fatal, mentally transmitted disease. Its real cause lies in the heads of the carriers - the developers. After all, it is they who create and develop projects, and they who (sometimes) infect them. As I have already mentioned, the beginning of the disease is blind adherence to the rules, but this is not the ultimate cause. Zombieism can also start with bad decisions or misguided solutions. The real cause of this problem is forgetting the most important principle of programming. And what is that rule?
THINK! Think when programming; think when designing; think when refactoring; think when implementing; think when planning. One could imagine that people who professionally deal with software development think about what they do all the time. This is a very naive notion. I've seen a lot of programmers who don't think about what they write. Instead of programming, they were simply coding. Thinking while programming means actively questioning what you write and why - the sense, convenience, practicality and efficiency of the solution. Design principles are valuable guidance and help, but they are no substitute for thinking about the code. In fact, they were created as a distillation of the thoughts and experiences of generations of programmers, but not in order to be applied always and everywhere. Many times, joining a project, I saw nonsensical solutions that were stubbornly continued. When I asked why, I would usually get something like: "You know - we simply do it this way" or "Because there is a rule / pattern / scheme that says to do this". From people who are terrified of thinking, I usually hear "We have to keep the code consistent throughout the project". Well, we don't! Principles should be applied when and where they make sense. Every project is different; it has different requirements, technologies, environment and a team that develops it. A program should be written well - and that means something different depending on the context. Typically, it should be efficient, extensible, readable, error-free and pretty - and it will be, if we think about what we are doing with it. Blindly sticking to every rule will inevitably contaminate the project with zombieism. I know that it is impossible to think about everything all the time, that you have to rely on assumptions already made, but it is worth verifying those assumptions constantly. The world is changing very fast - let our heads keep changing too!
There are many symptoms of this disease. Let's see some simple examples.
It's a good rule to name factory methods in a meaningful way, so that you know what's going on. Such a named method is then preferable to a regular constructor. But why do we need methods like "of" or "from"? What do they bring? Apparently someone once heard that this is how it should be done, and now does it without thinking about whether such code makes sense:
class Example(val value: String) {
    companion object {
        fun of(value: String) = Example(value)
    }
}

fun main() {
    Example.of("What the hell is this function for")
    Example("when you can simply use a normal constructor?")
}
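By contrast, here is a small sketch (names invented for illustration) of factory methods whose names actually carry information - the unit of the input, or the fact that parsing happens inside:

class Temperature private constructor(val celsius: Double) {
    companion object {
        // the name tells you the input unit and the conversion it performs
        fun fromFahrenheit(f: Double) = Temperature((f - 32) * 5 / 9)
        // the name tells you the input is text that gets parsed
        fun parse(text: String) = Temperature(text.removeSuffix("°C").toDouble())
    }
}

fun main() {
    println(Temperature.fromFahrenheit(212.0).celsius) // 100.0
    println(Temperature.parse("36.6°C").celsius) // 36.6
}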
The same applies when I see trivial builders for classes with a single field.
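For example, a builder like this (a made-up sketch) adds three calls' worth of ceremony where one constructor call would do:

class Token(val value: String) {
    class Builder {
        private var value: String = ""
        fun value(value: String) = apply { this.value = value }
        fun build() = Token(value)
    }
}

fun main() {
    val built = Token.Builder().value("abc").build() // three calls of ceremony...
    val plain = Token("abc") // ...instead of one constructor call
    println(built.value == plain.value) // true
}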
Limiting the size of classes and methods is a very important rule: it organizes your code and hints that a class is badly designed. However, sometimes it is worth exceeding the set limit instead of creating a monster that is difficult to use.
I once witnessed an unusual conversation. One developer insisted on using the full path to an adapter class in the domain code, without an import. He knew that importing adapter classes into the domain is against the ports-and-adapters architecture. Never mind that merely using this class in the domain was already breaking that architecture. He had once heard about the principle of "no imports of adapters in the domain code" and stuck to it without thinking about its meaning at all. Confronted with the accusation that this merely masks illegal dependencies, he replied that instead of creating a variable he would use a call chain - then he would "bypass" the import of the illegal dependency. At that point I gave up...
public class Playground {
    public static void main(String[] args) {
        DomainClass example = new DomainClass();
        adapter.AdapterType dependency = example.getDependency(); // hmm, how to hide that we use the adapter's dependency?
        String value = example.getDependency().getValue(); // simply do not assign it to a variable, right? ;)
    }
}
The fact that Kafka is used in one project does not mean that it is not worth using, say, RabbitMQ in another. Or maybe write the next microservice in Go instead of on the JVM? Will JPA be the best way to talk to a database everywhere? Of course, I don't mean mindlessly mixing different solutions - then we'd just have a mess. I mean approaching new challenges with a cool head and staying up to date!
One of the most recent examples of zombieism I've seen is manic-compulsive eitherism.
Libraries like Vavr or Arrow are powerful tools - everyone praises them.
Yes, yes - if you don't use such monads, you are lagging :)
— You fetch something from the repository - you return 'either' and push it all the way to the controller!
— But it doesn't make sense...
— You make no sense — 'either' everywhere!
— But how is that different from 'throws' then?
— 'Either'! Use 'either'! That's what we have to do!
— But then we don't use 'either' as it should be used, we don't use its power at all, and we pay for it with less readable code!
— 'Either'! Grrr! It's supposed to be 'either'! 'Either' always! 'Either' everywhere! They said to use it!
— ...
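To show what the dispute is about, here is a minimal sketch (a hand-rolled 'either' standing in for Vavr's or Arrow's type; the repository and all names are invented) where the result is folded right where the error is actually handled, instead of being re-wrapped through every layer up to the controller:

sealed class Either<out L, out R> {
    data class Left<L>(val value: L) : Either<L, Nothing>()
    data class Right<R>(val value: R) : Either<Nothing, R>()
}

data class User(val name: String)

// The repository boundary can genuinely fail, so an explicit error type pays off here.
fun findUser(id: Int): Either<String, User> =
    if (id == 42) Either.Right(User("Zoe")) else Either.Left("No user with id $id")

fun main() {
    // Fold the result where the error is handled...
    when (val result = findUser(1)) {
        is Either.Right -> println("Hello, ${result.value.name}")
        is Either.Left -> println("Error: ${result.value}")
    }
    // ...instead of passing the Left unchanged through service, facade and controller,
    // where each layer would add wrapping noise and no handling.
}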
Finally, it is worth mentioning one good practice thanks to which our code becomes much more understandable and other programmers will not blindly follow us: justify your choice of a particular rule, practice or pattern! This can be done in a comment, a readme file or on a page in Confluence. One formalized way of doing it is the ADR (Architecture Decision Record), and it is very good anti-zombie prevention. When other programmers can read under what conditions a given decision was made, they may be less inclined to follow it blindly. And when the conditions change, there is a chance that they will rethink the solution again :)
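A hypothetical example of what such a record might look like (the numbering, format and contents are made up for illustration):

ADR 007: Return 'either' only from repository boundaries
Context: explicit error types helped at I/O boundaries, but threading 'either' through services and controllers hurt readability.
Decision: repositories return 'either'; services fold it into domain types immediately.
Consequences: failures are handled in one well-defined place; revisit if error-handling requirements change.
Status: accepted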