Inter Process Communication in OS

Processes executing concurrently in the operating system may be either independent processes or cooperating processes.

 


A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process. There are several reasons for providing an environment that allows process cooperation:

Information sharing:- Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.

Computation speedup:- If we want a particular task to run faster, we must break it into subtasks, each of which will be executed in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).

Modularity:- We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

Convenience:- Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication:

(1) shared memory and

(2) message passing.

In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

Both of the models just discussed are common in operating systems, and many systems implement both. Message passing is useful for exchanging smaller amounts of data because no conflicts need to be avoided. Message passing is also easier to implement than is shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication. Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention. In contrast, in shared memory systems, system calls are required only to establish shared-memory regions. Once shared memory is established, all accesses are treated as routine memory accesses, and no assistance from the kernel is required.

 

Shared-Memory Systems

Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared memory segment. Other processes that wish to communicate using this shared memory segment must attach it to their address space. Recall that, normally, the operating system tries to prevent one process from accessing another process’s memory. Shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data and the location are determined by these processes and are not under the operating system’s control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
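The mechanics described above can be sketched in Python. This is a minimal, Unix-only illustration (it relies on `os.fork`): the parent establishes an anonymous shared mapping, the child writes into it with an ordinary memory access, and the parent reads the result back. The function name is illustrative, not part of any OS API.

```python
import mmap
import os

def exchange_via_shared_memory() -> bytes:
    # Establish an anonymous shared region (POSIX MAP_SHARED | MAP_ANONYMOUS);
    # a region backed by shm_open() would behave the same way.
    region = mmap.mmap(-1, 16)
    pid = os.fork()
    if pid == 0:
        # Child: writing is a routine memory access; no kernel call is needed.
        region[:5] = b"hello"
        os._exit(0)
    os.waitpid(pid, 0)        # crude synchronization: wait for the child to finish
    data = bytes(region[:5])  # parent reads what the child wrote
    region.close()
    return data
```

Note that `os.waitpid` stands in for real synchronization here; as the section says, the processes themselves are responsible for coordinating their accesses.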

 

Message-Passing Systems

Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space and is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. For example, a chat program used on the World Wide Web could be designed so that chat participants communicate with one another by exchanging messages. A message-passing facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size. If only fixed-sized messages can be sent, the system-level implementation is straightforward.
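The two operations can be sketched with Python's `multiprocessing.Pipe`, which gives each end a `send()`/`recv()` pair analogous to send(message) and receive(message). The function names here are illustrative only.

```python
from multiprocessing import Pipe, Process

def echo_child(conn) -> None:
    msg = conn.recv()        # receive(message): blocks until a message arrives
    conn.send(msg.upper())   # send(message): reply through the kernel-managed link
    conn.close()

def round_trip(text: str) -> str:
    parent_conn, child_conn = Pipe()
    p = Process(target=echo_child, args=(child_conn,))
    p.start()
    parent_conn.send(text)     # send(message)
    reply = parent_conn.recv() # receive(message)
    p.join()
    return reply
```

Unlike the shared-memory sketch, every exchange here passes through the operating system, which is why message passing needs no user-level conflict avoidance.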

 

Here are several methods for logically implementing a link and the send()/receive() operations:

  • Direct or indirect communication
  • Synchronous or asynchronous communication
  • Automatic or explicit buffering

 

Naming

Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication. Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication.

With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification.
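A toy mailbox registry makes the distinction concrete: senders and receivers name the mailbox's unique identifier, never each other. The registry and function names below are illustrative, not an OS interface.

```python
import queue

# Registry mapping mailbox ids to their message queues (illustrative only).
_mailboxes: dict = {}

def mbox_create(mid: str) -> None:
    # Establish a mailbox with a unique identification.
    _mailboxes[mid] = queue.Queue()

def mbox_send(mid: str, message) -> None:
    # Place a message into the named mailbox.
    _mailboxes[mid].put(message)

def mbox_receive(mid: str):
    # Remove (and return) a message from the named mailbox.
    return _mailboxes[mid].get()
```

Any process that knows the identifier "A" can call `mbox_send("A", msg)` without knowing who will eventually receive it, which is exactly the indirection a port provides.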

 

Synchronization

Communication between processes takes place through calls to the send() and receive() primitives. There are different design options for implementing each primitive. Message passing may be either blocking or nonblocking, also known as synchronous and asynchronous.

Blocking send:- The sending process is blocked until the message is received by the receiving process or by the mailbox.

Nonblocking send:- The sending process sends the message and resumes operation.

Blocking receive:- The receiver blocks until a message is available.

Nonblocking receive:- The receiver retrieves either a valid message or a null.
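The two receive variants can be contrasted directly with Python's `queue.Queue`, whose `get()` blocks and whose `get_nowait()` returns immediately; here `None` plays the role of the "null" message.

```python
import queue

def blocking_receive(q: queue.Queue):
    # Blocks until a message is available.
    return q.get()

def nonblocking_receive(q: queue.Queue):
    # Returns a valid message if one is waiting, otherwise a null (None).
    try:
        return q.get_nowait()
    except queue.Empty:
        return None
```

A caller of `nonblocking_receive` must therefore be prepared to handle `None` and retry, whereas `blocking_receive` never returns empty-handed.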

 

Buffering

Whether the communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue. Such queues can be implemented in three ways:

Zero capacity:- The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.

Bounded capacity:- The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue (either the message is copied or a pointer to the message is kept), and the sender can continue execution without waiting. The link’s capacity is finite, however. If the link is full, the sender must block until space is available in the queue.

Unbounded capacity:- The queue’s length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.

The zero-capacity case is sometimes referred to as a message system with no buffering; the other cases are referred to as systems with automatic buffering.

 

Examples of IPC Systems

An Example: POSIX Shared Memory
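The classic POSIX example is written in C with `shm_open()`, `ftruncate()`, and `mmap()`. As a hedged stand-in, Python's `multiprocessing.shared_memory` module wraps those same POSIX calls, so the following sketch follows the same create/attach/read/unlink lifecycle; the function names and message text are illustrative.

```python
from multiprocessing import Process, shared_memory

def writer(name: str) -> None:
    # Attach to an existing named segment (like shm_open() without O_CREAT).
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:12] = b"Hello, IPC!!"
    shm.close()               # detach, but do not remove, the segment

def posix_shm_demo() -> bytes:
    # Create a named segment (shm_open() with O_CREAT, then mmap(), underneath).
    shm = shared_memory.SharedMemory(create=True, size=64)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return bytes(shm.buf[:12])
    finally:
        shm.close()
        shm.unlink()          # like shm_unlink(): remove the segment's name
```

Only the creating process calls `unlink()`; every process that attached must still `close()` its own mapping, mirroring the C API's separation of unmapping from removal.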
