2011.07.08 - man-made vs. compiler-made optimizations

"do not touch any of these wires!" (taken from http://farm3.static.flickr.com/2180/2514370850_f69c57b339.jpg) for many years i have had a good laugh at “low level programmers” who do funny things that are hard to read and do not influence performance at all. a typical example is x<<2 instead of the more obvious and readable x*4 – compilers have known that trick for many years. actually, they know far more…

a friend of mine once published a blog entry about a guy he worked with, who in a code review marked parts like x*4 as “inefficient”, while seeing no problem with his own code implementing a custom, weird O(N^3) algorithm for a task that could be done in O(N*log(N)) with off-the-shelf algorithms…

usually adding strange hacks that are supposed to improve efficiency does not do much except obfuscate the code they were supposed to improve. besides, optimizing the code is rarely needed at all. and when it is, it's usually less than 1% of the code (often just one time-critical call!), not the remaining 99% covered with “efficiency improvements”. remember – algorithms first, and if that is still not enough, profile the time-critical calls.

today i read an interesting presentation by Felix von Leitner – Source Code Optimization. if you're interested in a case study on low-level hacks and what compilers can do, read it! you may actually be surprised that hacks meant to make code faster can sometimes break optimizations the compiler would have done itself. now add caches, memory access, virtual address spaces, paging, branch prediction, and more…

think about all of this next time you think adding inline assembly is a good idea…

blog/2011/07/08/1.txt · Last modified: 2013/05/17 19:08 (external edit)