
Threads and its Types in Operating System

Last Updated : 03 Apr, 2025

A thread is a single sequence stream within a process. Because threads share many of the properties of a process, they are often called lightweight processes. On a single-core processor, threads are rapidly switched, giving the illusion that they execute in parallel; on multi-core systems, threads can truly execute in parallel across different cores. Like a process, each thread passes through different states. In this article, we discuss threads in detail, along with the similarities and differences between threads and processes.

What are Threads?

Threads are small units of a computer program that can run independently. They allow a program to perform multiple tasks at the same time, which makes it more efficient and responsive, especially for work that can be divided into smaller parts.

Each thread has:

  • A program counter
  • A register set
  • A stack space

Threads are not independent of each other as they share the code, data, OS resources, etc.

Threads allow multiple tasks to be performed simultaneously within a process, making them a fundamental concept in modern operating systems.
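
As a rough illustration, the following C sketch (using POSIX threads; the file, variable, and function names are purely illustrative) shows two threads that share the program's code and a global variable, while each keeps its own variable on its own stack. On a POSIX system it could be built with something like `gcc demo.c -pthread`.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* shared by all threads (data segment)    */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    int local = *(int *)arg;            /* 'local' lives on this thread's own stack */
    pthread_mutex_lock(&lock);
    shared_counter += local;            /* code and globals are shared              */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;

    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);

    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("shared_counter = %d\n", shared_counter);   /* prints 3 */
    return 0;
}
```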

Similarities Between Threads and Processes

  • On a single CPU core, only one thread or process is actually running at a time.
  • Within a process, both execute sequentially.
  • Both can create children.
  • Both can be scheduled by the operating system: Both threads and processes can be scheduled by the operating system to execute on the CPU. The operating system is responsible for assigning CPU time to the threads and processes based on various scheduling algorithms.
  • Both have their own execution context: Each thread and process has its own execution context, which includes its own register set, program counter, and stack. This allows each thread or process to execute independently and make progress without interfering with other threads or processes.
  • Both can communicate with each other: Threads and processes can communicate with each other using various inter-process communication (IPC) mechanisms such as shared memory, message queues, and pipes. This allows threads and processes to share data and coordinate their activities.
  • Both can be preempted: Threads and processes can be preempted by the operating system, which means that their execution can be interrupted at any time. This allows the operating system to switch to another thread or process that needs to execute.
  • Both can be terminated: Threads and processes can be terminated by the operating system or by other threads or processes. When a thread or process is terminated, all of its resources, including its execution context, are freed up and made available to other threads or processes.

Differences Between Threads and Processes

  • Resources: Processes have their own address space and resources, such as memory and file handles, whereas threads share memory and resources with the program that created them.
  • Scheduling: Processes are scheduled to use the processor by the operating system, whereas threads are scheduled to use the processor by the operating system or the program itself.
  • Creation: The operating system creates and manages processes, whereas the program or the operating system creates and manages threads.
  • Communication: Because processes are isolated from one another and must rely on inter-process communication mechanisms, they generally have more difficulty communicating with one another than threads do. Threads, on the other hand, can interact with other threads within the same program directly.

Threads, in general, are lighter than processes and are better suited for concurrent execution within a single program. Processes are commonly used to run separate programs or to isolate resources between programs.
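
The memory-sharing difference can be seen in a small C sketch (POSIX calls; the variable names are illustrative): a write made in a forked child is invisible to the parent, because the child gets its own copy of the address space, while a write made by a thread is visible to the rest of the process, because threads share one address space.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 0;                       /* global variable used for the comparison */

void *thread_fn(void *arg) {
    value = 42;                      /* same address space: visible to main()   */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                  /* child process: private copy of 'value'  */
        value = 99;
        exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   value = %d\n", value);   /* still 0: child's write not seen */

    pthread_t t;
    pthread_create(&t, NULL, thread_fn, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* 42: threads share memory */
    return 0;
}
```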

Types of Threads

There are two main types of threads, User Level Threads and Kernel Level Threads. Let's discuss each one in detail:

User Level Thread (ULT)

User Level Threads are implemented in a user-level library; they are not created using system calls. Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel does not know about user-level threads and manages the process as if it were single-threaded.
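
To make the idea concrete, here is a minimal user-space context-switch sketch using the ucontext API (obsolescent in newer POSIX but still available on glibc/Linux); the kernel sees only a single thread of execution while control moves between the two contexts entirely in user space. The names and stack size are illustrative.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;

static void worker(void) {
    printf("worker: running in a user-level context\n");
    swapcontext(&worker_ctx, &main_ctx);   /* yield back to main, no kernel scheduling */
    printf("worker: resumed, finishing\n");
}

int main(void) {
    static char stack[64 * 1024];          /* separate stack for the worker context */

    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = stack;
    worker_ctx.uc_stack.ss_size = sizeof stack;
    worker_ctx.uc_link = &main_ctx;        /* return here when worker() finishes */
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: back in main, switching again\n");
    swapcontext(&main_ctx, &worker_ctx);
    printf("main: worker done\n");
    return 0;
}
```

A user-level thread library is essentially this switching plus its own scheduler and per-thread bookkeeping, all invisible to the kernel.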

Advantages of ULT

  • Can be implemented on an OS that doesn't support multithreading.
  • Simple representation, since each thread needs only a program counter, register set, and stack space.
  • Simple to create, since no kernel intervention is needed.
  • Thread switching is fast, since no OS calls need to be made.

Disadvantages of ULT

  • Little or no coordination between the threads and the kernel.
  • If one thread causes a page fault (or blocks on a system call), the entire process blocks.

Kernel Level Thread (KLT)

The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel keeps a single master thread table that tracks all threads in the system, in addition to the traditional process table that tracks processes. The OS kernel provides system calls to create and manage threads.
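
As a sketch of what "the kernel knows about the threads" means in practice: on Linux, whose NPTL pthread library uses a 1:1 kernel-thread model, each pthread is backed by its own kernel task and therefore has its own kernel thread ID, which the following illustrative C program prints.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Each pthread is a kernel-scheduled task, so each gets a distinct thread ID. */
void *show_tid(void *arg) {
    printf("pthread in process %d has kernel thread id %ld\n",
           getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, show_tid, NULL);
    pthread_create(&t2, NULL, show_tid, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```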

Advantages of KLT

  • Since the kernel has full knowledge of all threads in the system, the scheduler may decide to give more CPU time to processes with a large number of threads.
  • Good for applications that frequently block.

Disadvantages of KLT

  • Thread creation and switching are slower, since they require system calls into the kernel.
  • Each thread requires a kernel thread control block, which adds overhead.

Threading Issues

  • The fork() and exec() System Calls : The semantics of the fork() and exec() system calls change in a multithreaded program. If one thread in a program calls fork(), does the new process duplicate all threads, or is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork(). The exec() system call, in contrast, works as usual: if a thread invokes exec(), the program specified in its parameter replaces the entire process, including all threads.
  • Signal Handling : A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern: (1) a signal is generated by the occurrence of a particular event; (2) the signal is delivered to a process; (3) once delivered, the signal must be handled. A signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal handler. Every signal has a default handler that the kernel runs when handling that signal; this default action can be overridden by a user-defined signal handler.
  • Thread Cancellation : Thread cancellation involves terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be canceled. Another situation occurs when a user presses a button on a web browser to stop a web page from loading: a page often loads using several threads (each image in a separate thread), and pressing the stop button cancels all threads loading the page. A thread that is to be canceled is often referred to as the target thread. Cancellation of a target thread may occur in two different ways: (1) asynchronous cancellation, where one thread immediately terminates the target thread; and (2) deferred cancellation, where the target thread periodically checks whether it should terminate, allowing it to terminate itself in an orderly fashion (see the cancellation sketch after this list).
  • Thread-Local Storage : Threads belonging to a process share the data of the process. Indeed, this data sharing provides one of the benefits of multithreaded programming. However, in some circumstances each thread might need its own copy of certain data. We call such data thread-local storage (TLS). For example, in a transaction-processing system we might service each transaction in a separate thread, and each transaction might be assigned a unique identifier. To associate each thread with its unique identifier, we could use thread-local storage (see the sketch after this list).
  • Scheduler Activations : One scheme for communication between the user-thread library and the kernel is known as scheduler activation. It works as follows: The kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor.
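
Below is a minimal, illustrative C sketch of deferred cancellation with POSIX threads: the worker loops until the main thread requests cancellation, and the request only takes effect at cancellation points such as pthread_testcancel() or usleep(). The function and variable names are assumptions for the example.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Deferred cancellation: the target thread terminates only at cancellation points. */
void *searcher(void *arg) {
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);  /* the default, shown explicitly */
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();        /* explicit cancellation point        */
        usleep(1000);                /* usleep() is also a cancellation point */
    }
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, searcher, NULL);

    usleep(10000);                   /* let the worker run briefly          */
    pthread_cancel(t);               /* request cancellation of the target  */
    pthread_join(t, NULL);           /* wait until it has actually ended    */
    puts("target thread cancelled");
    return 0;
}
```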
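
And a minimal, illustrative sketch of thread-local storage in C: the C11 _Thread_local qualifier (GCC also accepts __thread) gives every thread its own copy of transaction_id, echoing the transaction-processing example above. The identifiers used here are made up for illustration.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread gets its own copy of this variable (thread-local storage). */
static _Thread_local int transaction_id = 0;

void *handle_transaction(void *arg) {
    transaction_id = *(int *)arg;    /* store this thread's unique identifier */
    printf("thread handling transaction %d\n", transaction_id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 101, id2 = 202;        /* illustrative transaction identifiers */

    pthread_create(&t1, NULL, handle_transaction, &id1);
    pthread_create(&t2, NULL, handle_transaction, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("main's transaction_id is still %d\n", transaction_id);  /* prints 0 */
    return 0;
}
```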

Advantages of Threading

  • Responsiveness: A multithreaded application increases responsiveness to the user.
  • Resource Sharing: Resources like code and data are shared between threads, thus allowing a multithreaded application to have several threads of activity within the same address space.
  • Increased Concurrency: Threads may run in parallel on different processors, increasing concurrency on a multiprocessor machine.
  • Lower Cost: It costs less to create and context-switch threads than processes.
  • Faster Context Switching: Context switching between threads takes less time than between processes.

Disadvantages of Threading

  • Complexity: Threading can make programs more complicated to write and debug because threads need to synchronize their actions to avoid conflicts.
  • Resource Overhead: Each thread consumes memory and processing power, so having too many threads can slow down a program and use up system resources.
  • Difficulty in Optimization: It can be challenging to optimize threaded programs for different hardware configurations, as thread performance can vary based on the number of cores and other factors.
  • Debugging Challenges: Identifying and fixing issues in threaded programs can be more difficult compared to single-threaded programs, making troubleshooting complex.

