Processes that share memory can exchange data by writing and reading shared variables. As an example, consider two processes p and q that share some variable s. Then p can communicate information to q by writing new data into s, which q can then read.
The above discussion raises an important question: how does q know when p writes new information into s? In some cases, q does not need to know. For instance, q might be a load balancing program that simply looks at the current load of p's machine, stored in s. When q does need to know, it could poll, but polling puts undue burden on the CPU. Another possibility is a software interrupt, which we discuss below. A familiar alternative is to use semaphores or condition variables: process q could block until p changes s and sends a signal that unblocks q. However, these solutions would not allow p to (automatically) block if s cannot hold all the data it wants to write. (The programmer could manually implement a bounded buffer using semaphores.)

Moreover, conventional shared memory is accessed by processes on a single machine, so it cannot be used for communicating information among remote processes. Recently, there has been a lot of work in distributed shared memory over LANs (which you will study in 203/243); such systems tend to be implemented on top of interprocess communication. However, even if we could implement shared memory directly over WANs (without message passing), it is not an ideal abstraction for all kinds of IPC. In particular, it is not the best abstraction for sending requests to servers, which requires coding these requests as data structures (unless the data structures are encapsulated in monitors, which we shall study later). As we shall see later, message passing (in particular, RPC) is more appropriate for supporting client-server interaction.