SJF (Shortest Job First) is a scheduling algorithm that associates with each process the length of its next CPU burst (execution time). The CPU is then given to the process in the ready queue with the shortest next CPU burst. SJF is provably optimal, in that for a given set of processes and their CPU bursts it yields the least average waiting time over all processes. The average waiting time is defined by:
W_S = (W_1 + W_2 + ... + W_n) / n [1], where W_k is the waiting time of the k-th process, given by W_1 = 0 and W_k = W_{k-1} + t_{k-1} [2] for 2 <= k <= n, and t_i is the execution time (length of the next CPU burst) of the i-th process, 1 <= i <= n. (Note that the execution time of the last process in the queue, t_n, does not affect any waiting time.)
If we substitute [2] into [1], we get: W_S = ((n-1)t_1 + (n-2)t_2 + ... + (n-k)t_k + ... + t_{n-1}) / n [3]
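As a quick sanity check, the recurrence [2] and the closed form [3] can be computed side by side; a minimal Python sketch (function names are illustrative, not from the original):

```python
def avg_wait_recurrence(bursts):
    """Average waiting time W_S via [1] and [2]: W_1 = 0, W_k = W_{k-1} + t_{k-1}."""
    n = len(bursts)
    waits = [0]                       # W_1 = 0: the first process waits for nobody
    for t in bursts[:-1]:             # t_n never contributes to any waiting time
        waits.append(waits[-1] + t)
    return sum(waits) / n

def avg_wait_closed_form(bursts):
    """Closed form [3]: W_S = ((n-1)t_1 + (n-2)t_2 + ... + 1*t_{n-1}) / n."""
    n = len(bursts)
    return sum((n - i) * t for i, t in enumerate(bursts[:-1], start=1)) / n

bursts = [6, 8, 7, 3]
assert avg_wait_recurrence(bursts) == avg_wait_closed_form(bursts)  # both give 10.25
```

For the example bursts the waiting times are 0, 6, 14, 21, so both forms give (0 + 6 + 14 + 21) / 4 = 10.25.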
Now let us suppose that we have an arbitrary set of n CPU bursts, { t_1, t_2, ..., t_n }.
The average process waiting time for such a set is given by [3]. If we take from that set two processes, at positions k-j and k, such that t_{k-j} > t_k (with k > j), and swap them, the new average waiting time is:
W_S1 = ((n-1)t_1 + ... + (n-k+j)t_k + ... + (n-k)t_{k-j} + ... + t_{n-1}) / n [4]
If we subtract [4] from [3] we get:
W_S - W_S1 = ((n-k+j)t_{k-j} + (n-k)t_k - (n-k+j)t_k - (n-k)t_{k-j}) / n = j(t_{k-j} - t_k) / n > 0, therefore W_S1 < W_S.
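The swap argument can be checked numerically with [3]; a small Python sketch (the example bursts are made up for illustration):

```python
def avg_wait(bursts):
    """Average waiting time from [3]: W_S = sum of (n - i) * t_i / n for i < n."""
    n = len(bursts)
    return sum((n - i) * t for i, t in enumerate(bursts[:-1], start=1)) / n

# Positions k-j = 1 and k = 3 hold bursts 8 > 3; swapping them must lower W_S.
before = [8, 6, 3, 7]   # avg_wait = (3*8 + 2*6 + 1*3) / 4 = 9.75
after  = [3, 6, 8, 7]   # avg_wait = (3*3 + 2*6 + 1*8) / 4 = 7.25
assert avg_wait(after) < avg_wait(before)

# The improvement matches j * (t_{k-j} - t_k) / n: here 2 * (8 - 3) / 4 = 2.5.
assert avg_wait(before) - avg_wait(after) == 2.5
```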
By repeating this swap, always moving shorter jobs in front of longer ones, we eventually sort the starting set completely and achieve the minimal average waiting time:
W_S of the ordered set < ... < W_S1 < W_S. Since we started with an arbitrary set, this completes this short and simple proof of the optimality of SJF.
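For small sets the conclusion can be verified exhaustively: sorting the bursts ascending (which is exactly what SJF does) attains the minimum average waiting time over all orderings. A brute-force Python check, assuming the `avg_wait` helper defined from [3]:

```python
from itertools import permutations

def avg_wait(bursts):
    """Average waiting time from [3]."""
    n = len(bursts)
    return sum((n - i) * t for i, t in enumerate(bursts[:-1], start=1)) / n

bursts = [6, 8, 7, 3]
best = min(avg_wait(list(p)) for p in permutations(bursts))
# The ascending (SJF) order is one of the orderings that achieve the minimum.
assert avg_wait(sorted(bursts)) == best
```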
The main drawback of the SJF algorithm is that, of course, in modern operating system environments we cannot precisely determine the length of a process's next CPU burst. We can at best approximate it, and only to a certain degree and under certain conditions.
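One common way to approximate the next burst (not part of the proof above, mentioned here only as an illustration) is an exponential average of the measured previous bursts; the smoothing factor alpha and the initial guess tau0 below are tunable assumptions:

```python
def predict_next_burst(history, alpha=0.5, tau0=10.0):
    """Exponential-average burst predictor: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.

    history -- measured lengths of the previous CPU bursts, oldest first
    alpha   -- weight of the most recent measurement (0 gives a constant guess,
               1 trusts only the last burst)
    tau0    -- initial prediction used before any burst has been observed
    """
    tau = tau0
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5 the prediction drifts from the initial guess toward the data:
# 10 -> 8 -> 6 -> 6 -> 5 for the bursts 6, 4, 6, 4.
assert predict_next_burst([6, 4, 6, 4]) == 5.0
```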
Andrej Blagojevic,
Computer Science and Engineering student,
School of Electrical Engineering,
University of Belgrade, Serbia.