I’m not smart enough to verify the accuracy of this claim, nor to say exactly what the implications are, but it seems like fixing it might improve performance.
There’s a variable holding the number of cores (called cpus) that is hardcoded to max out at 8. That doesn’t mean cores beyond the eighth aren’t utilized; it just means the scheduling scaling factor stops changing, in both the linear and the logarithmic case, once you go above that number:
/*
 * Increase the granularity value when there are more CPUs,
 * because with more CPUs the 'effective latency' as visible
 * to users decreases. But the relationship is not linear,
 * so pick a second-best guess by going with the log2 of the
 * number of CPUs.
 *
 * This idea comes from the SD scheduler of Con Kolivas:
 */
static unsigned int get_update_sysctl_factor(void)
{
        unsigned int cpus = min_t(unsigned int, num_online_cpus(), 8);
        unsigned int factor;

        switch (sysctl_sched_tunable_scaling) {
        case SCHED_TUNABLESCALING_NONE:
                factor = 1;
                break;
        case SCHED_TUNABLESCALING_LINEAR:
                factor = cpus;
                break;
        case SCHED_TUNABLESCALING_LOG:
        default:
                factor = 1 + ilog2(cpus);
                break;
        }

        return factor;
}
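
To make the effect of that cap concrete, here is a minimal userspace sketch (not the kernel’s code; the ilog2() below is just a stand-in for the kernel helper of the same name) that prints the logarithmic factor for a few CPU counts. The factor grows to 4 at 8 CPUs and then stays there no matter how many more are online:

#include <stdio.h>

/* Stand-in for the kernel's ilog2(): floor of log base 2. */
static unsigned int ilog2(unsigned int n)
{
        unsigned int log = 0;

        while (n >>= 1)
                log++;
        return log;
}

/* Mimic the SCHED_TUNABLESCALING_LOG branch, including the cap at 8. */
static unsigned int factor_log(unsigned int online_cpus)
{
        unsigned int cpus = online_cpus < 8 ? online_cpus : 8;

        return 1 + ilog2(cpus);
}

int main(void)
{
        unsigned int counts[] = { 1, 2, 4, 8, 16, 64, 128 };

        for (unsigned int i = 0; i < sizeof(counts) / sizeof(counts[0]); i++)
                printf("%3u CPUs -> factor %u\n", counts[i], factor_log(counts[i]));
        /* Prints factors 1, 2, 3, 4, then 4 again for 16, 64, and 128 CPUs. */
        return 0;
}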
The core claim is this:
On this point, I have no idea (I hope someone more knowledgeable will weigh in). But I’d say the headline is misleading at best.