I’m sure that most of us in our programming careers have seen something like this…
public static int GetNextSize(int i)
{
    // multiply it by four and make sure it is positive
    return i > 0 ? i << 2 : ~(i << 2) + 1;
}
Wasn’t it nice that they at least commented the line? Ha. Well, maybe you have not seen something exactly like that, but I’m sure you have seen something equally asinine. And I’m also sure that when you saw it, the first thing that went through your head was "why would anyone ever write something like this?" Surprisingly, the answer is that they did it because they are a programmer. Well, a programmer who is not in control of their urges. 🙂
Most programmers love to be clever. In fact, we probably got into programming because we loved being clever. We loved writing small, streamlined code that ran faster than anyone thought it could and at the same time confounded all those who read it. The problem is that the programming world is quickly changing and one of my favorite quotes of all time no longer applies:
"Real programmers don’t write comments – it was hard to write, it should be hard to understand."
Now, obviously that never applied, and I hope you realized that, but a fundamental shift started years ago and has really solidified now: the complexity in software has moved from the smaller parts of an application to the design of the application as a whole. Because the systems our code lives in have become so much more complex, the top priority for almost all code should be how easy it is to read and write.
As applications get larger and more complex, we rarely have to worry about how fast we can read in a file, but we think long and hard about how to make the format of that file easy to parse and flexible. In fact, I remember when XML first came out and a good number of people predicted its downfall because it was so verbose, but that was actually its strongest selling point. There have been several "binary XML" specs, but not surprisingly none of them have caught on, because they take away the one thing that made XML so attractive: its readability and parseability. What most of those people didn’t understand was that bandwidth would get cheaper and computers would get cheaper and faster, but people’s time would only get more expensive. The simpler and easier to work with a standard is (while still being flexible), the more likely it is to be adopted. This has held true for HTML and XML, and sometime in the future it may even hold true for REST web services.
The same rules now apply to us as programmers: we no longer get any credit for writing a routine that parses a file 5 percent faster, especially not if it makes the code harder to read, longer to write, longer to debug, or harder to test. No longer does a programmer get respect for using pointer arithmetic and bit-shifting to speed up math operations. These kinds of optimizations are (for the most part) just pointless exercises, often written for nothing more than the programmer’s ego. (And before you guys start ranting about graphics programming or physics modeling: yes, I do realize that there are places in the world for code like this.) But that does remind me of one of my favorite quotes from "Code Complete" by Steve McConnell:
"Programming is not like being in the CIA, you don’t get credit for being sneaky."
(It took me a little bit to find that, my copy of "Code Complete" has more highlights than Paris Hilton’s hair)
So, if we don’t get credit for writing clever code anymore, then what do we get credit for? Well, we get credit for making our code simple and easy to understand. We get credit for loose coupling, for testability, for shallow hierarchies, for good exception handling, for fluent interfaces, for not giving our co-workers headaches: essentially, for all-around good design. We get credit for being good software engineers, not for being winners of the International Obfuscated C Code Contest (although you will get a certain kind of credit for that, but you get my point).
Now hopefully anyone in their right mind would have written the method at the beginning of this post like this…
public static int GetNextSize(int i)
{
    return Math.Abs(i * 4);
}
Without caring one bit about performance (I honestly don’t know, or care, what the implications are at this point), I can say with 100% confidence that this code is better. Why? Because I can immediately look at it and tell what it is doing. Even without a comment it is more readable than the commented code at the top, and that is what we want: self-documenting code. As the Pragmatic Programmers pointed out, you don’t want to comment on what the code is actually doing; you only want to comment on the assumptions that were made in order to write the code, the information that cannot be derived just by reading the code itself. The problem is that the more complex our code gets, the more we need comments just to explain what the code is doing.
And don’t think that I am saying there is no place for optimization; that is not true. There is just less room for optimization in the source itself, and lots of room for optimization in terms of overall system performance and developer productivity. It is more important to focus on the big picture and solve performance problems that are system-wide, or to refactor code so that changes can be made much faster, than it is to solve a performance problem in a single line of code… unless, of course, that line of code is being called 8 million times by 50 parts of the system.
So the next time you go to write a super-clever line of code, ask yourself: "Will the benefits of this super cleverness be outweighed by the future issues in maintaining and understanding the code?" If there is any hesitation at all, then you’d better not be clever, because 3 months from now you will come across that code, say "What the hell was I thinking?", and end up rewriting it anyway.
And one final note: if you are the kind of person who just has to write super-clever-looking code that no one can make sense of, there is a language for you. It is called Perl. 🙂 (let the flames begin)
Too true, code is read more times than it is written.
Your comments on performance ring true as well. I work amongst a team of premature optimizers, all too eager to squeeze perceived efficiency out of every algorithm they write, readability be damned.
With C# 3, I also suffer some bad readability tendencies. I probably overuse "var", use Dictionary<object, Func> instead of switch statements, and use lots of one-line conditionals.
@Vijay – First of all, I love your Gravatar. And yes, I agree that in C# 3 it is going to get a lot easier to write "clever" code with LINQ, lambdas, var, etc… A lot of it is just a learning curve on the syntax, though, and not necessarily the code being complex. Something like this may look complex to a person unacquainted with the syntax:
MyObj.DoSomething(x => x + 2);
But it is actually very simple if you know the syntax. Although var will be a particular problem, especially when people use it for values that come from method calls and other items that don’t show an explicit type. For instance:
// good – reduced typing
var variable = new MyVarType();

// bad – what's my type?
var variable = MyMethodCall();
And one-line conditionals, as long as they are kept short, I think can improve readability. As for Dictionary<object, Func>, I have not used that one yet, but I can see how it may have some latent cleverness. 🙂
Great post, I agree 100%. I always prefer to write more readable code instead of trying to optimize up front or show off how few LOC it takes me to achieve something. Granted, sometimes chaining can result in code that’s easier to read, since you avoid a lot of temporary variables. But anyhow…
Also, slight nitpick — Doesn’t
i > 0 ? i << 2 : ~(i << 2) + 1;
yield a different number for i = 2 vs i = -2? I.e. 8 vs 9?
Nope, it does not return different numbers for 2 versus -2. The reason is that .NET stores integers in two’s complement. In two’s complement, a negative number is represented by taking 2 to the power of N (where N is the number of bits in the number, in our case 32) and subtracting the positive representation of the number from it.
A quick way to do this (which is what I did in the code above) is to take the bitwise negation of the number and then add 1. So, if we had a number like 5 (0101), we would take its bitwise negation (1010) and then add 1 (1011). Using 4 bits, this is the same as taking 2^4 (16) and subtracting 5, which gives us 11 (1011). Hopefully that is pretty clear; if not, please feel free to ask any questions.
A brave post (and true) !!
It depends. Engineering is the art of compromise. On a desktop machine running at X GHz, I agree: for most purposes it isn’t worth trying to squeeze out those last few percentage points of performance; most applications will be I/O bound anyway. The flip side is an application I’m working on right now, which is strewn with things like this, including a fully unrolled 16-bit by 16-bit multiply. Why? Because I’m targeting a processor with a max clock rate of 20 MHz, and the slower I can clock the processor, the longer the batteries will last in the device.
You’re trading off processor time to get maintainability. Very often that’s a good bargain, but you still have to remember that when it’s all said and done, the compiler has to turn this into machine code. For a desktop application, the extra ~two dozen clocks probably don’t matter. You probably don’t need to write your program with Pascal strings. Inline assembly isn’t even on the radar. What you can’t do is ever forget that it’s a compromise.
@Jacques: Very good point, my world is mostly stuck in business software and web development. So I certainly have different issues to deal with than you do.
In my view, whether code is too clever depends entirely on the level and type of experience that you expect those viewing your code to have.
I wouldn’t expect a C programmer to have any problems with:
while(*p++) { DoSomething(); }
and I wouldn’t expect a good theoretical computer scientist to have any problems with:
return i > 0 ? i << 2 : ~(i << 2) + 1;
The problem comes when people expect junior programmers, or not-really-programmers to understand this sort of code.
If you’re writing a physics engine, you can be pretty ‘clever’. If you’re writing code you expect a junior web-designer to understand, you have very little flexibility.
Even though I understand your point, I have to say that I like clever solutions.
I’m always tempted to find the clever solution and to avoid brute force.
My favorite quote on this subject is by Brian Kernighan. “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”
So true, and it is a big mindset change from “clever” to “un-clever”.