Slide 52 of 65

According to Wikipedia, the OS is made aware that some of the logical cores it sees are actually hyper-threads sharing the same physical core. It must arrange to schedule "appropriate" threads onto the two logical cores (e.g., threads from different tasks) for the user to see the best performance improvement.


A neat experiment: If you log into the machines in Gates 5201/5205 and type less /proc/cpuinfo, you'll get a report about the processor. The Intel CPUs in those machines have four cores, and hyper-threading is disabled in their current configuration, so cpuinfo reports four logical cores.

The Intel CPUs in the Gates 3000 machines have six cores, and hyper-threading is enabled. You'll find cpuinfo reports 12 logical cores although there are only 6 physical ones. In the terms I used in class, the CPU has 6 cores, but manages up to 12 independent hardware thread execution contexts.
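To make the logical-vs-physical distinction concrete, here is a small sketch that counts both from /proc/cpuinfo-style text. The parsing helper and the sample input are mine, not from the course: the sample is synthetic, built to mimic a hyper-threaded 6-core CPU like the Gates 3000 machines (each "processor" entry is one logical core; unique (physical id, core id) pairs are physical cores).

```python
def count_cores(cpuinfo_text):
    """Return (logical_cores, physical_cores) from /proc/cpuinfo-style text."""
    logical = 0
    physical = set()
    entry = {}
    for line in cpuinfo_text.splitlines():
        if ":" in line:
            key, _, val = line.partition(":")
            entry[key.strip()] = val.strip()
        elif not line.strip() and entry:
            # A blank line ends one logical-processor entry.
            logical += 1
            physical.add((entry.get("physical id"), entry.get("core id")))
            entry = {}
    if entry:  # last entry may have no trailing blank line
        logical += 1
        physical.add((entry.get("physical id"), entry.get("core id")))
    return logical, len(physical)

# Synthetic cpuinfo: 12 logical processors mapping onto 6 physical cores
# (logical processors p and p+6 share core p, as with Hyper-Threading).
sample = "\n\n".join(
    f"processor\t: {p}\nphysical id\t: 0\ncore id\t: {p % 6}"
    for p in range(12)
)

print(count_cores(sample))  # (12, 6): 12 logical cores, 6 physical
```

On a real Linux machine you could feed this the contents of /proc/cpuinfo instead of the synthetic sample; the same counting logic applies.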


What is the advantage of disabling hyperthreading?


@nslobody I suppose without hyperthreading you would consume less power, especially if the applications being run are not heavily multithreaded and would not take full advantage of this feature. After all, not every program was written with parallelism in mind :P Other than that, I can't think of a better reason. Maybe from an educational standpoint, HT makes it harder to reason about the cores, since they operate slightly differently. Hence, if you were running certain benchmarks on them you would get "unexpected" results.


The idea of hyperthreading is fairly sound, but the implementation isn't perfect. I have heard of people who run servers disabling it. The two hardware threads share some core resources they can contend for.


@nslobody: The benefit of hardware multi-threading is to avoid stalls caused by long-latency operations (or in the case of Intel Hyper-threading: unused instruction issue slots) by executing work from other threads. One disadvantage of concurrently executing multiple threads on a processor is the potential for interference. For example, imagine a CPU has a 1 MB cache, and each of the threads has a working set that fits in cache. Now consider the case where two threads run concurrently on a multi-threaded version of the CPU. If the aggregate working set of the two threads exceeds 1 MB, the threads will knock each other's data out of cache, and as a result suffer cache misses that they would not have suffered if only one was running on the processor alone. The result might be overall throughput that is worse than if only one thread was run at a time. If a thread's data accesses cache well, there are few memory stalls to hide and the benefits of hyper-threading are diminished.
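The working-set arithmetic in that example can be sketched as a toy model. All the numbers here are illustrative (a hypothetical 1 MB cache and 600 KB per-thread working sets), just restating the argument above: each thread fits in cache alone, but the two together do not.

```python
# Toy model of cache interference between co-scheduled hyper-threads.
# Numbers are illustrative, not measurements of any real CPU.

CACHE_BYTES = 1 * 1024 * 1024  # hypothetical 1 MB shared cache

def fits_in_cache(working_sets, cache_bytes=CACHE_BYTES):
    """True if the aggregate working set of co-running threads fits in cache."""
    return sum(working_sets) <= cache_bytes

thread_a = 600 * 1024  # 600 KB working set
thread_b = 600 * 1024  # 600 KB working set

print(fits_in_cache([thread_a]))            # True: each thread fits alone
print(fits_in_cache([thread_a, thread_b]))  # False: together they thrash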

Of course, there are also situations where concurrently running two threads on the same core can actually reduce cache misses. If threads share a significant portion of their working set, a load by one thread may bring the data into cache for a subsequent load by the other thread. When the other thread ultimately accesses this data, it will be in cache, avoiding "cold misses" that would exist if the thread was running alone on the core.

Another example of when Hyper-threading may not be desirable is if latency, rather than throughput, matters most to your workload. Hyper-threading results in the sharing of core resources between two threads (including ALUs, not just the cache as discussed above). The need to minimize time to complete the work in one thread, as opposed to maximizing work done per unit time by the core, might lead one to consider disabling hyper-threading.

That said, I do not know the rationale for why Hyper-threading is disabled on the GHC 5201/5205 machines.


This is an excellent article covering some of the implementation details of how Intel hyper-threading works.