Evolution of Operating Systems



The evolution of operating systems (OS) can be traced back to the 1950s when computers were first invented. Over the years, operating systems have evolved significantly, adapting to changing hardware and software environments.

Initially, operating systems were simple and designed to provide basic functionality to users, but with the advent of new technologies, the complexity of operating systems increased to support more sophisticated tasks.

Today, operating systems are an integral part of computing and are used in various devices, including desktops, laptops, smartphones, and tablets, among others.

The evolution of operating systems since 1950 is described in detail in this article. Here, we will discuss six main types of operating systems that have evolved over the past 70 years.


Serial Processing

The history of operating systems began in 1950. Prior to 1950, programmers directly interacted with the hardware as there was no operating system available at that time. If a programmer wished to execute a program in those days, the following sequential steps were necessary.

  • Type the program onto punched cards.
  • Load the punched cards into the card reader.
  • Submit the job to the computing machine; if there were any errors, they were indicated by lights.
  • The programmer examined the registers and main memory to identify the cause of an error.
  • Take the output from the printer.
  • Then the programmer prepared for the next program.


This type of processing was difficult for users: it took much time, and each program had to wait for the completion of the previous one. Because programs are submitted to the machine one after another, the method is called serial processing.
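The cost of serial processing can be sketched with a small simulation. The job names and durations below are made up for illustration; the point is that no job can start until the previous one has completely finished.

```python
# Hypothetical sketch of serial processing: jobs run strictly one after
# another, so total elapsed time is the sum of all job durations.
jobs = [("compile payroll", 3), ("print report", 2), ("run simulation", 5)]

clock = 0
for name, duration in jobs:
    start = clock
    clock += duration          # the next job cannot start until this one ends
    print(f"{name}: starts at t={start}, finishes at t={clock}")

print(f"total elapsed time: {clock}")
```

Even a one-unit job submitted last must wait for everything ahead of it, which is exactly the queueing problem later systems set out to solve.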

Batch Processing

Before 1960, it was difficult to execute a program on a computer because the machine was spread across three different rooms: one room for the card reader, one room for executing the program, and another room for printing the result.

The user or machine operator had to run between these three rooms to complete a single job. Batch processing solved this problem.

In the batch processing technique, jobs of the same type are batched together and executed as a group. The operator carries a group of jobs at a time from one room to another, so the programmer need not run between the three rooms repeatedly.
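The saving can be sketched as follows. The job types below are hypothetical; the idea is that one room-to-room trip per batch replaces one trip per job.

```python
from itertools import groupby

# Hypothetical sketch of batching: group jobs of the same type so the
# operator makes one trip per batch instead of one trip per job.
jobs = ["fortran", "cobol", "fortran", "cobol", "fortran"]

# Serial processing: one trip per job.
trips_serial = len(jobs)

# Batch processing: sort by type, then one trip per batch.
batches = {job_type: list(group) for job_type, group in groupby(sorted(jobs))}
trips_batched = len(batches)

print(f"trips without batching: {trips_serial}")   # 5
print(f"trips with batching:    {trips_batched}")  # 2
```

Batching also meant the compiler or assembler for a job type had to be loaded only once per batch, not once per job.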


Multiprogramming

Multiprogramming is a technique used to execute multiple programs concurrently on a single processor.

In multiprogramming, several processes reside in the main memory at the same time. The operating system (OS) selects and begins to execute one of the programs in the main memory.

The following figure depicts the layout of a multiprogramming system in which the main memory can hold up to 5 jobs at a time, and the CPU executes them one by one.


In a non-multiprogramming system, the CPU can only execute one program at a time. If the running program is waiting for any I/O device, the CPU becomes idle, which negatively affects the CPU's performance.

In a multiprogramming environment, if a process is waiting for I/O, the CPU switches from that process to another process in the job pool. As a result, the CPU is rarely idle.
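A rough probabilistic model makes the benefit concrete. The burst and wait lengths below are assumptions for illustration: each job computes for 2 time units, then waits 3 units for I/O, and the CPU is idle only when every resident job is waiting at once.

```python
# Hypothetical sketch: estimate CPU utilization as more jobs are kept in
# main memory. A job alternates a 2-unit CPU burst with a 3-unit I/O wait.
cpu_burst, io_wait = 2, 3

# Fraction of time a single job spends waiting for I/O.
p_wait = io_wait / (cpu_burst + io_wait)   # 0.6

# Rough model: with n independent jobs resident, the CPU is idle only
# when all n are waiting for I/O simultaneously (probability p_wait ** n).
for n in (1, 2, 5):
    utilization = 1 - p_wait ** n
    print(f"{n} job(s) in memory -> CPU utilization ~ {utilization:.0%}")
```

With one job the CPU is busy only about 40% of the time; with five such jobs resident, utilization rises above 90%, which is the whole point of keeping several processes in memory.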


Multiprogramming in operating systems offers several advantages, including:

  • Increased CPU utilization
  • Faster processing of I/O operations
  • Efficient use of system resources
  • Improved system throughput
  • Improved user productivity and reduced wait time.

Time-Sharing System

Time-sharing or multitasking is a logical extension of multiprogramming, in which multiple tasks are executed by the CPU by switching between them.

The CPU scheduler selects a task from the ready queue and switches the CPU to that task. When the time slice assigned to a task expires, the CPU switches from that task to another task.

In this method, the CPU time is shared among different processes, making it a 'Time-Sharing System'. Generally, the time slices are defined by the operating system.
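The time-slice switching described above is essentially round-robin scheduling, and can be sketched in a few lines. The task names, remaining times, and slice length are made up for illustration.

```python
from collections import deque

# Hypothetical sketch of round-robin time sharing: each task runs for at
# most one time slice; if unfinished, it rejoins the back of the ready queue.
TIME_SLICE = 2
ready = deque([("T1", 5), ("T2", 3), ("T3", 1)])   # (task, remaining time)

clock = 0
while ready:
    task, remaining = ready.popleft()
    run = min(TIME_SLICE, remaining)
    clock += run                                   # task uses its slice
    if remaining - run > 0:
        ready.append((task, remaining - run))      # preempted, wait again
    else:
        print(f"{task} finished at t={clock}")
```

Notice that the short task T3 finishes at t=5 instead of waiting behind the long T1, which is why interactive users perceive the machine as responsive.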


The main advantage of a time-sharing system is efficient utilization of CPU resources. It was developed to provide interactive use of a computer system at a reasonable cost.

A time-sharing operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.

Another advantage of a time-sharing system over a batch processing system is that the user can interact with the job while it is executing, which is not possible in batch systems.

Parallel System

There is a trend toward multiprocessor systems. Such systems have more than one processor in close communication, sharing the computer bus, the clock, and sometimes memory and peripheral devices.

These systems are referred to as "Tightly Coupled" systems, and are also called parallel systems. In a parallel system, a number of processors execute their jobs in parallel.
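Dividing one task among several processors can be sketched with a process pool. The chunk boundaries and the summation task are assumptions for illustration; the pattern is that each processor computes a partial result and the results are combined.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical sketch: split one large summation into chunks and let a
# pool of worker processes (one per processor) compute them in parallel.
def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))   # combine partial results
    print(total)   # equals sum(range(1_000_000))
```

The answer is the same as a sequential sum; what changes is that the four chunks can run on four processors at once, which is the performance advantage listed below.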


Parallel operating systems offer several advantages, including:

  • Increased performance: By dividing a task among multiple processors or cores, parallel operating systems can complete tasks faster than traditional single-core systems.
  • Improved reliability: If one processor or core fails, the other processors or cores can still continue working. This increases the overall reliability of the system.
  • Scalability: Parallel operating systems can scale to support more users and higher workloads by adding more processors or cores to the system.
  • Better resource utilization: Parallel operating systems can utilize system resources more efficiently than traditional operating systems, as tasks can be distributed among multiple processors or cores.
  • Increased flexibility: Parallel operating systems can be customized to meet specific needs and can be configured for different types of workloads.

Distributed System

In a distributed operating system, the processors do not share memory or a clock; instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses. These systems are referred to as "Loosely Coupled" systems.


If multiple sites are connected by high-speed communication lines, it is possible to share resources from one site to another.

For example, let's consider two sites, s1 and s2, which are connected by communication lines. Suppose s1 has a printer, but s2 does not. Then, in a distributed operating system, it is possible to access the printer from s2 without physically moving to s1.

Therefore, resource sharing is possible in a distributed operating system.
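The printer example can be sketched with two loopback sockets standing in for the two sites; the site names, port handling, and job text are all hypothetical.

```python
import socket
import threading

# Hypothetical sketch: site s1 owns a printer; site s2 sends it a print
# job over a communication line. Loopback sockets stand in for two machines.
def printer_site(server):
    conn, _ = server.accept()                 # s1 waits for a request
    job = conn.recv(1024).decode()
    print(f"s1 printing: {job}")
    conn.sendall(b"done")                     # acknowledge back to s2
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))                 # the OS picks a free port
server.listen(1)
t = threading.Thread(target=printer_site, args=(server,))
t.start()

client = socket.socket()                      # s2 has no printer of its own
client.connect(server.getsockname())
client.sendall(b"quarterly report")
print(f"s2 received: {client.recv(1024).decode()}")
client.close()
t.join()
server.close()
```

The job never leaves s2 physically; only a message crosses the communication line, which is what resource sharing in a loosely coupled system amounts to.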

In distributed systems, a large computation can be partitioned into a number of sub-tasks, which run concurrently on different sites.

If a resource or system fails in one site due to technical problems, we can use other systems or resources in some other sites. Therefore, the reliability of a distributed system increases.

