It is the systematic automation of the "poor man's profiler": you press pause in the debugger and write down which function you are currently in. If you do this often enough, you see where the program spends most of its time, or what percentage of its time it spends in a given function. OProfile can do this tens of thousands of times within a few seconds and give you very accurate results.
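To make the principle concrete, here is a minimal sketch of such a sampler in C, assuming Linux on x86-64 (the function names hog/tame, the sample limit, and the workloads are made up for the example; real profilers like OProfile do the same thing system-wide with kernel and hardware-counter support). A SIGPROF timer interrupts the program at a fixed rate, the handler records the interrupted instruction pointer, and afterwards each address is resolved back to a function name:

/* Illustration only, not production code.
 * Build: gcc -O0 -rdynamic sampler.c -ldl -o sampler */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>

#define MAX_SAMPLES 100000

static void *samples[MAX_SAMPLES];      /* interrupted instruction pointers */
static volatile sig_atomic_t nsamples;

static void on_prof(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)si;
    /* Record where the program was when the timer fired. */
    ucontext_t *uc = ctx;
    if (nsamples < MAX_SAMPLES)
        samples[nsamples++] = (void *)uc->uc_mcontext.gregs[REG_RIP];
}

/* Two deliberately unbalanced workloads; 'hog' should collect ~10x the samples. */
double hog(void)  { double s = 0; for (long i = 0; i < 200000000; i++) s += i; return s; }
double tame(void) { double s = 0; for (long i = 0; i <  20000000; i++) s += i; return s; }

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_prof;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGPROF, &sa, NULL);

    /* Fire once per millisecond of consumed CPU time. */
    struct itimerval tv = { { 0, 1000 }, { 0, 1000 } };
    setitimer(ITIMER_PROF, &tv, NULL);

    volatile double sink = hog() + tame();
    (void)sink;

    /* Tally samples per function, like the pen-and-paper version. */
    int nhog = 0, ntame = 0, nother = 0;
    for (int i = 0; i < nsamples; i++) {
        Dl_info info;
        if (dladdr(samples[i], &info) && info.dli_sname) {
            if      (strcmp(info.dli_sname, "hog")  == 0) nhog++;
            else if (strcmp(info.dli_sname, "tame") == 0) ntame++;
            else nother++;
        } else nother++;
    }
    printf("%d samples: hog %d, tame %d, other %d\n",
           (int)nsamples, nhog, ntame, nother);
    return 0;
}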
Do you have any data about how accurate it is compared to other profilers?
My understanding is that a sampling profiler is not the most accurate but I may be wrong.
The more samples you acquire, the more accurate it should become, but I have no hard data. I remember doing some experiments with it to see how far I could go, and after a few minutes of sampling a tight loop I could see execution times for individual machine instructions that seemed quite reasonable.
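For a rough feel for the numbers (treating the samples as independent): if a function really accounts for a fraction p of the run time, then with N samples the measured fraction has a standard error of about sqrt(p*(1-p)/N). With p = 0.10 and N = 10,000 that is roughly 0.3 percentage points, so a 10% hotspot would typically be reported somewhere between 9.4% and 10.6%.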
I used OProfile to fix a huge CPU hog in the TurboPower Ipro HTML component 9 years ago that slowed lhelp down to the point of unusability. I remember it was actually really easy to find: it turned out they used a linear search in a string list (it was completely obvious in the visualization of the call graph from the OProfile output), and I replaced it with a hash map.
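The original fix was in Pascal, but the idea translates to any language. Here is a small sketch in C using POSIX hcreate/hsearch (the names and values are made up) showing the two kinds of lookup side by side: the linear scan touches every entry on the way to a hit, while the hash table needs a single probe no matter how long the list grows.

#include <search.h>   /* POSIX hcreate/hsearch */
#include <stdio.h>
#include <string.h>

int main(void)
{
    static char *names[]  = { "alpha", "beta", "gamma" };
    static int   values[] = { 1, 2, 3 };
    size_t n = 3;
    const char *key = "gamma";

    /* Linear search: O(n) per lookup, every entry is compared. */
    for (size_t i = 0; i < n; i++)
        if (strcmp(names[i], key) == 0)
            printf("linear: %s -> %d\n", names[i], values[i]);

    /* Hash table: O(1) per lookup on average. */
    hcreate(2 * n);
    for (size_t i = 0; i < n; i++)
        hsearch((ENTRY){ .key = names[i], .data = &values[i] }, ENTER);
    ENTRY *hit = hsearch((ENTRY){ .key = (char *)key }, FIND);
    if (hit)
        printf("hash: %s -> %d\n", hit->key, *(int *)hit->data);
    hdestroy();
    return 0;
}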
The advantage of sampling profilers is that they have almost no effect on the execution of the program: you can attach to a running process without slowing it down, and you can even profile a running kernel without disturbing anything. For quickly identifying CPU hogs, the accuracy is usually more than enough.
I just tried to get OProfile working again an hour ago (after not using it for 5 years) and it seems they changed some things (at least in the Ubuntu distribution); I can't get it to work like I used to (or I am still missing something). But there is also perf: it uses the kernel's built-in perf_events interface, can be installed with apt-get, and there is a GUI (hotspot) to visualize its results. From a quick look at it, it seems it should be able to replace OProfile for most needs.
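For anyone who wants to try it, a typical perf session looks something like this (./myprogram and the PID 1234 are placeholders; on Ubuntu the tool itself comes from the linux-tools packages):

# whole-program profile with call graphs, then an interactive report
perf record -g ./myprogram
perf report

# attach to an already running process for 10 seconds
perf record -g -p 1234 -- sleep 10

# live, system-wide view of where CPU time is going
perf top

The perf.data file that perf record writes is also what hotspot opens to draw its flame graphs.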