for many years i've had a good laugh at “low level programmers” who do funny things that are hard to read and don't influence performance at all. a typical example is x<<2 instead of the more obvious and readable x*4 – compilers have known that trick for many years. they know far more, actually…
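you can check it yourself with a minimal sketch like this one (function names are mine) – compile with gcc -O2 -S or clang -O2 -S and compare the assembly of the two functions; they should come out the same:

    #include <stdio.h>

    /* both forms compile to the same machine code with any modern
       optimizing compiler – the shift buys you nothing but worse readability */
    unsigned times_four_shift(unsigned x) { return x << 2; }
    unsigned times_four_mul(unsigned x)   { return x * 4; }

    int main(void) {
        printf("%u %u\n", times_four_shift(10), times_four_mul(10));
        return 0;
    }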
a friend of mine once published a blog entry about a guy he worked with, who in a code review marked parts like x*4 as “inefficient”, while seeing no problem with his own code implementing a custom, weird O(N³) algorithm for a task that could be done in O(N*log(N)) with off-the-shelf algorithms…
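just to illustrate the point (a made-up example of mine, not the actual task from that review): say you need to check an array for duplicates. a hand-rolled nested loop is O(N²); sorting a copy with the off-the-shelf qsort and scanning neighbours gets you O(N*log(N)) with less code to review:

    #include <stdlib.h>
    #include <string.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* returns 1 if data[] contains a duplicate, 0 if not, -1 on allocation failure */
    int has_duplicates(const int *data, size_t n) {
        if (n < 2)
            return 0;
        int *copy = malloc(n * sizeof *copy);
        if (copy == NULL)
            return -1;
        memcpy(copy, data, n * sizeof *copy);
        qsort(copy, n, sizeof *copy, cmp_int);   /* off-the-shelf O(N*log(N)) sort */
        int dup = 0;
        for (size_t i = 1; i < n; i++) {
            if (copy[i] == copy[i - 1]) { dup = 1; break; }
        }
        free(copy);
        return dup;
    }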
usually, adding strange hacks that are supposed to improve efficiency does little except obfuscate the code they were supposed to improve. besides that, optimizing the code is rarely needed. and when it is, it's usually less than 1% of the code (often just one time-critical call!), not the remaining 99% covered with “efficiency improvements”. remember – algorithms first, and if that is still not enough, profile the time-critical calls.
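and “profiling” doesn't have to mean anything fancy to start with – a real profiler (gprof, perf, valgrind/callgrind) is the proper tool, but even a crude timer around the one suspected call tells you whether it's worth touching at all. a minimal sketch, assuming a POSIX system (do_work is just a placeholder for your hot call):

    #include <stdio.h>
    #include <time.h>

    /* placeholder for the call you suspect is time-critical */
    static long do_work(void) {
        long s = 0;
        for (long i = 0; i < 10 * 1000 * 1000; i++)
            s += i % 7;
        return s;
    }

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        long r = do_work();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("result=%ld, took %.2f ms\n", r, ms);
        return 0;
    }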
today i read an interesting presentation by Felix von Leitner – Source Code Optimization. if you're interested in a case study on low-level hacks and what compilers can do, read it! you may actually be surprised that hacks meant to make code faster can sometimes break optimizations the compiler would have done itself. now add caches, memory access, virtual address spaces, paging, branch prediction, and more…
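one illustration of that effect (my own sketch, not taken from the slides): a plain byte-zeroing loop is simple enough for gcc or clang to recognize and often replace with a single memset call at -O2/-O3, while a hand-unrolled “optimized” version may get compiled literally, exactly as written – compare the two with -S yourself:

    #include <stddef.h>

    /* the boring version – compilers typically recognize this idiom
       and can emit one call to memset for the whole loop */
    void clear_plain(char *p, size_t n) {
        for (size_t i = 0; i < n; i++)
            p[i] = 0;
    }

    /* the "clever" version – manual 4x unrolling "for speed" can prevent
       that idiom recognition, so you may end up with slower code */
    void clear_clever(char *p, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            p[i] = 0; p[i + 1] = 0; p[i + 2] = 0; p[i + 3] = 0;
        }
        for (; i < n; i++)
            p[i] = 0;
    }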
think about all of this the next time you think adding some inline assembly is a good idea…