nmrrs

Branch prediction may have been covered when we took 213, but in case anyone else has forgotten: a branch predictor is a circuit that guesses whether a conditional jump will be taken (basically, which code block in a conditional statement will execute). When a program starts running, the branch predictor can't make very good predictions, but it detects patterns as execution goes on; for example, it may learn that one branch is taken 99.99% of the time. If the predictor is reasonably confident that a particular branch will be taken, it can tell the processor to begin executing along that path as if the condition had already evaluated in that branch's favor. If the prediction turns out to be wrong, though, the processor incurs an extra cost: it must discard the speculative work before it can proceed down the correct branch. An accurate branch predictor is therefore important, or the processor will keep paying these penalties whenever it executes instructions before the branch outcome is known for certain.

See here for more info/implementation details
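To make the "detects patterns" idea concrete, here is a minimal sketch of one classic scheme, a 2-bit saturating counter (real predictors are far more elaborate and track branch history; this is purely illustrative, not any particular CPU's design):

#include <stdio.h>

/* One 2-bit saturating counter. States:
   0 = strongly not-taken, 1 = weakly not-taken,
   2 = weakly taken,       3 = strongly taken */
static int state = 0;

int predict(void) {
    return state >= 2;                  /* predict taken in states 2 and 3 */
}

void update(int taken) {
    if (taken  && state < 3) state++;   /* nudge toward "taken" */
    if (!taken && state > 0) state--;   /* nudge toward "not taken" */
}

int main(void) {
    int mispredicts = 0;
    /* A branch taken 99 times out of 100: after a short warm-up the
       counter saturates at "strongly taken", so only the rare
       not-taken outcomes (plus the warm-up) are mispredicted. */
    for (int i = 0; i < 1000; i++) {
        int taken = (i % 100 != 99);
        if (predict() != taken) mispredicts++;
        update(taken);
    }
    printf("mispredicts: %d / 1000\n", mispredicts);   /* ~12 */
    return 0;
}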

totofufu

@nmrrs How likely is it that a branch will be taken 99.99% of the time, though? In my experience, the number of programs I've seen or written that actually take a particular branch 99.99% of the time approaches 0. What are some non-contrived examples of programs that would actually benefit from branch prediction (or is it acceptable for the predictor to start speculating once a branch is taken, say, something like 70% of the time)?

memebryant

@totofufu A simple example of a branch that is taken 99.99% of the time is

/* The loop-back branch is taken 0xDEADBEEF (~3.7 billion) times
   and falls through exactly once, at the very end. */
for (size_t i = 0; i < 0xDEADBEEF; i++) {
    ...
}

The loop's branch will be taken for 0xDEADBEEF iterations before it finally falls through, so branch prediction does quite well here. The predictor may mispredict the first iteration, but every remaining iteration except the last will be predicted correctly by all of Intel's branch predictors (and I assume other vendors' processors use similar models).

It is entirely correct that a conditional branch on essentially random data can't be predicted this well, but a programmer aiming for performance should look to minimize this type of branch; one way to do so is sketched below.
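One common way to do that (a hypothetical example of mine, not something from the lecture) is to rewrite a data-dependent branch as straight-line arithmetic, which compilers can usually turn into a conditional move or masking instead of a jump:

#include <stdint.h>
#include <stddef.h>

/* Branchy version: whether the if is taken depends on the data, so
   on random input the predictor is wrong about half the time. */
int64_t sum_branchy(const int32_t *a, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (a[i] >= 128)
            sum += a[i];
    }
    return sum;
}

/* Branchless version: build a 0-or-all-ones mask from the
   comparison and AND it with the value instead of branching. */
int64_t sum_branchless(const int32_t *a, size_t n) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++) {
        int32_t mask = -(a[i] >= 128);   /* 0 or 0xFFFFFFFF */
        sum += a[i] & mask;
    }
    return sum;
}

The trade-off is that the branchless version always does the work for both outcomes, so it only wins when the branch is genuinely unpredictable.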

maxdecmeridius

Let me see if I understand this correctly. The branch predictor, out-of-order control logic, and memory prefetcher work together to predict, with very high accuracy, which instructions the processor will execute next. Isn't that only really important for ILP, since the processor can then fetch and execute those instructions while the current pre-branch instructions are still in flight?

hofstee

@totofufu while that may be true, it's not unusual for branch predictors in modern CPUs to predict correctly >90% of the time (for a typical program).

vincom2

@maxdecmeridius I think it's important for everything? For instance, if the code for the branch that's going to be taken isn't located immediately after the jump (so it's not already in the cache) and the branch predictor mispredicts, there will be an instruction cache miss and the processor will have to stall while the correct code is loaded from main memory (or even disk).

fleventyfive

With respect to branch prediction, I strongly urge everyone to go through this awesome answer on Stack Overflow: Branch Prediction

Fun fact: this answer holds the record for the most upvotes of any answer on Stack Overflow!
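For anyone who wants to see the effect for themselves, here is a small sketch in the spirit of that answer's experiment (the array size, repetition count, and timing method are my own choices, not taken from the answer):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    32768
#define REPS 1000

static int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

static void time_sum(const int *a) {
    clock_t start = clock();
    long long sum = 0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < N; i++)
            if (a[i] >= 128)             /* the branch being predicted */
                sum += a[i];
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("sum = %lld, time = %.3f s\n", sum, secs);
}

int main(void) {
    static int a[N];
    for (int i = 0; i < N; i++)
        a[i] = rand() % 256;
    time_sum(a);                         /* unsorted: ~50% mispredicts */
    qsort(a, N, sizeof(int), cmp);
    time_sum(a);                         /* sorted: branch is predictable */
    return 0;
}

Note that at higher optimization levels the compiler may replace the branch with a conditional move or vectorize the loop, which hides the effect, so the gap tends to be most visible at something like -O1.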