The computer scientist Donald Knuth described his first encounter with what he refers to as a jump trace in an interview for ''Dr. Dobb's Journal'' in 1996, saying:

{{quote|In the '60s, someone invented the concept of a 'jump trace'. This was a way of altering the machine language of a program so it would change the next branch or jump instruction to retain control, so you could execute the program at fairly high speed instead of interpreting each instruction one at a time and record in a file just where a program diverged from sequentiality. By processing this file you could figure out where the program was spending most of its time. So the first day we had this software running, we applied it to our Fortran compiler supplied by, I suppose it was in those days, Control Data Corporation. We found out it was spending 87 percent of its time reading comments! The reason was that it was translating from one code system into another into another.}}
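The jump-trace idea, recording only the points where execution diverges from straight-line order and tallying them afterwards, can be loosely mimicked in a high-level language. The following Python sketch is an analogy for illustration only, not Knuth's machine-language patching: it uses CPython's <code>sys.settrace</code> line events to log non-sequential line transitions (taken branches and jumps), and the workload function <code>hot_loop</code> is a hypothetical stand-in for the program being traced.

```python
# Toy analogue of a "jump trace": record only the transitions where
# execution diverges from sequential line order, then count them.
# Illustrative sketch only -- sys.settrace makes this tracer itself slow.
import sys
from collections import Counter

jump_counts = Counter()   # (function, from_line, to_line) -> occurrences
_last_line = {}           # frame id -> last executed line number

def tracer(frame, event, arg):
    if event == "line":
        prev = _last_line.get(id(frame))
        cur = frame.f_lineno
        # A transition that is not "fall through to the next line"
        # corresponds to a taken branch or jump.
        if prev is not None and cur != prev + 1:
            jump_counts[(frame.f_code.co_name, prev, cur)] += 1
        _last_line[id(frame)] = cur
    return tracer  # keep tracing inside called frames

def hot_loop(n):          # hypothetical workload to be traced
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.settrace(tracer)
hot_loop(1000)
sys.settrace(None)

# Processing the recorded jumps reveals where the time goes: the
# loop back-edge (a backward jump) dominates the tally.
(func, src, dst), count = jump_counts.most_common(1)[0]
print(func, src, dst, count)
```

Sorting the recorded jumps by frequency is the analogue of "processing this file": the most frequently taken backward jump marks the hottest loop in the traced code.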
==Iteration==

The example above illustrates that effective hot spot detection is often an iterative process, and perhaps one that should always be carried out rather than simply assuming that a program is performing reasonably. After eliminating the extraneous processing (for instance, by stripping the embedded comments), a fresh runtime analysis would more accurately pinpoint the "genuine" hot spots in the translation. Had no hot spot detection taken place at all, the program might well have consumed vastly more resources than necessary, possibly for many years and on numerous machines, without anyone ever being fully aware of it.

==Instruction set simulation as a hot spot detector==