
Interactive Measurements

Besides throughput, a CPU scheduler is also responsible for providing good responsiveness. For our interactive measurement we used the same four classes (gold, silver, bronze, and best effort) created in the throughput experiment. During the experiment, the gold, bronze, and best effort classes each run 5 CPU-bound jobs, while the silver class runs a single ``interactive'' job, for 16 tasks in total. Interactivity is simulated by having the interactive job sleep for 200 ms after busy-waiting for N ms. We define the response time as the wall-clock time it takes to run the busy_waiting function minus the CPU time actually consumed by the function (N). We repeat the test 50 times and record the maximum and the average. The code below shows the measurement loop. In our experiment N varies from 50 to 500.

   for (i = 0; i < 50; i++) {
           start = get_time();          /* wall-clock time before the burst */
           busy_waiting(N);             /* consume N ms of CPU time */
           end = get_time();            /* wall-clock time after the burst */
           response = end - start - N;  /* scheduling delay in ms */
           usleep(200 * 1000);          /* sleep 200 ms between bursts */
   }
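
The helpers get_time and busy_waiting are not shown above; a minimal sketch of what they might look like in C, assuming millisecond units and the POSIX clock_gettime interface (both helper names and their units are our assumptions, not part of the original code):

   #include <time.h>

   /* Wall-clock time in milliseconds (hypothetical helper). */
   long get_time(void)
   {
           struct timespec ts;
           clock_gettime(CLOCK_MONOTONIC, &ts);
           return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
   }

   /* Burn roughly n ms of CPU time in a tight loop (hypothetical helper). */
   void busy_waiting(long n)
   {
           struct timespec ts;
           long t0, spent = 0;
           clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
           t0 = ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
           while (spent < n) {
                   clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
                   spent = ts.tv_sec * 1000L + ts.tv_nsec / 1000000L - t0;
           }
   }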


Figure 2: Response time of the interactive task as N varies


Figure 3: CPU time percentage received by the interactive task's class

As we can see from Figure 2, when the Linux scheduler is used, the response time of the test application is quite good as long as the task is identified as interactive, but it degrades sharply once the task is no longer classified as an interactive job. Using our CFS, the response time increases earlier but degrades much more slowly, since we maintain the 30% CPU time guarantee. Figure 3 shows that the CPU time percentage is indeed roughly maintained.
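
A quick sanity check on this behavior (assuming the silver class's share stays pinned at the 30% guarantee stated above): a burst that needs N ms of CPU time completes in roughly N/0.3 ms of wall-clock time, so the expected response time is

   response = N/0.3 - N = 2.33 N

That is, it grows linearly in N, which is consistent with the gradual degradation we observe under our CFS.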

