
Commit 54dcca0

Update 03-mpi-api slides
- Add information about isend/irecv, synchronization and MPI_Barrier specifically
- Extend 03-mpi-api lecture plan
1 parent 34cc58e commit 54dcca0

File tree

2 files changed: +90 −2 lines

03-mpi-api/03-mpi-api.tex (+87 −1)
@@ -64,7 +64,90 @@
 \tableofcontents
 \end{frame}
 
-\section{MPI collective operations}
+\section{Advanced Send/Receive API}
+
+\begin{frame}{Why Are \texttt{MPI\_Send} and \texttt{MPI\_Recv} Not Enough?}
+\texttt{MPI\_Send} and \texttt{MPI\_Recv} are blocking operations, causing processes to wait until communication completes.
+This leads to:
+\begin{itemize}
+\item \textbf{Performance Bottlenecks:} Blocking calls can lead to idle CPU time, reducing parallel efficiency.
+\item \textbf{Lack of Overlap:} Cannot overlap computation with communication, limiting optimization opportunities.
+\item \textbf{Scalability Issues:} As the number of processes increases, blocking operations can significantly degrade performance.
+\end{itemize}
+\end{frame}
+
+\begin{frame}{\texttt{MPI\_Isend}}
+Non-blocking send function. Initiates a send operation that returns immediately.
+
+\texttt{int MPI\_Isend(const void *buf, int count, MPI\_Datatype datatype, int dest, int tag, MPI\_Comm comm, MPI\_Request *request);}
+
+Parameters:
+
+\begin{itemize}
+\item buf: Initial address of send buffer
+\item count: Number of elements to send
+\item datatype: Data type of each send buffer element
+\item dest: Rank of destination process
+\item tag: Message tag
+\item comm: Communicator
+\item request: Communication request handle
+\end{itemize}
+Usage: Allows the sender to proceed with computation while the message is being sent.
+\end{frame}
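
A minimal C sketch of this pattern, assuming two ranks and a placeholder computation step: the sender posts the message with MPI_Isend, continues working, and only calls MPI_Wait before the buffer may be reused. Run with at least two processes.

#include <mpi.h>

/* Sketch: rank 0 posts a non-blocking send to rank 1, overlaps it with
   local work, then waits before reusing the buffer. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[1000] = {0};
    MPI_Request req;

    if (rank == 0) {
        MPI_Isend(buf, 1000, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
        /* ... computation that does not touch buf can run here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* buf is safe to reuse only after this */
    } else if (rank == 1) {
        MPI_Recv(buf, 1000, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}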
+
+\begin{frame}{\texttt{MPI\_Irecv}}
+Non-blocking receive function. Initiates a receive operation that returns immediately.
+\texttt{int MPI\_Irecv(void *buf, int count, MPI\_Datatype datatype, int source, int tag, MPI\_Comm comm, MPI\_Request *request);}
+
+Parameters:
+
+\begin{itemize}
+\item buf: Initial address of receive buffer
+\item count: Maximum number of elements to receive
+\item datatype: Data type of each receive buffer element
+\item source: Rank of source process or \texttt{MPI\_ANY\_SOURCE}
+\item tag: Message tag or \texttt{MPI\_ANY\_TAG}
+\item comm: Communicator
+\item request: Communication request handle
+\end{itemize}
+Usage: Allows the receiver to proceed with computation while waiting for the message.
+\end{frame}
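
A complementary sketch, pairing MPI_Irecv with MPI_Isend and completing both with MPI_Waitall; the buffer sizes and tag are illustrative, and it assumes exactly two processes (mpirun -np 2).

#include <mpi.h>

/* Sketch: two ranks exchange arrays with non-blocking calls and overlap
   the transfer with independent local work. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int peer = (rank == 0) ? 1 : 0;            /* assumes exactly two ranks */
    double sendbuf[1000] = {0}, recvbuf[1000];
    MPI_Request reqs[2];

    /* Post the receive first, then the send; both calls return immediately. */
    MPI_Irecv(recvbuf, 1000, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, 1000, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation that does not touch the buffers can run here ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE); /* recvbuf is valid only after this */

    MPI_Finalize();
    return 0;
}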
+
+\section{Synchronization}
+
+\begin{frame}{What is synchronization in MPI?}
+Synchronization mechanisms are essential for coordinating processes.
+Sometimes we need to ensure that a particular action has already completed.
+
+Synchronization facts:
+
+\begin{itemize}
+\item Process Coordination: Mechanism to ensure processes reach a certain point before proceeding
+\item Data Consistency: Ensures all processes have consistent data before computations
+\item Types of Synchronization:
+\begin{itemize}
+\item Point-to-point synchronization: explicit sending and receiving of messages between two processes using functions like \texttt{MPI\_Send} and \texttt{MPI\_Recv}
+\item Collective synchronization: collective operations (see the following slides) in which all processes must participate
+\end{itemize}
+\item Importance: Prevents race conditions and ensures program correctness
+\end{itemize}
+\end{frame}
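
As a rough illustration of the point-to-point style of synchronization mentioned above, a hypothetical snippet can use an empty message as a "go" signal between two ranks; the tag value and the work being waited for are assumptions.

#include <mpi.h>

/* Sketch: point-to-point synchronization with a zero-length message.
   Rank 0 finishes some work, then signals rank 1 that it may proceed. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* ... prepare data, write a file, etc. ... */
        MPI_Send(NULL, 0, MPI_CHAR, 1, 42, MPI_COMM_WORLD);   /* "go" signal */
    } else if (rank == 1) {
        MPI_Recv(NULL, 0, MPI_CHAR, 0, 42, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                           /* blocks until the signal arrives */
        /* ... safe to continue here ... */
    }

    MPI_Finalize();
    return 0;
}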
+
+\begin{frame}{\texttt{MPI\_Barrier}}
+Global synchronization function. It blocks each process until all processes in the communicator have reached the barrier.
+
+\texttt{int MPI\_Barrier(MPI\_Comm comm);}
+
+Usage:
+
+\begin{itemize}
+\item Ensures all processes have completed preceding computations
+\item Commonly used before timing code segments for performance measurement
+\item Typical use case: Synchronize before starting a collective operation
+\end{itemize}
+\end{frame}
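
A typical timing pattern built on MPI_Barrier and MPI_Wtime, sketched under the assumption that compute() stands in for the code segment being measured.

#include <mpi.h>
#include <stdio.h>

/* Sketch: barriers make all ranks start and stop the timed region together. */
static void compute(void) { /* ... workload being measured ... */ }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);        /* everyone starts the measurement together */
    double t0 = MPI_Wtime();

    compute();

    MPI_Barrier(MPI_COMM_WORLD);        /* wait for the slowest rank before stopping the clock */
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("elapsed: %f s\n", t1 - t0);

    MPI_Finalize();
    return 0;
}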
+
+\section{Collective operations}
 
 \begin{frame}{Collective operations}
 \end{frame}
@@ -87,6 +170,9 @@ \section{MPI collective operations}
 \begin{frame}{Reduction}
 \end{frame}
 
+\begin{frame}{All of these APIs have non-blocking versions}
+\end{frame}
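
Assuming the slide refers to the MPI-3 non-blocking collectives, a small sketch with MPI_Iallreduce could look like this; each rank contributes its rank number, and the global sum is valid only after MPI_Wait.

#include <mpi.h>
#include <stdio.h>

/* Sketch: non-blocking collective reduction overlapped with independent work. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, sum = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Request req;
    MPI_Iallreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent work can overlap with the reduction ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("rank %d sees sum = %d\n", rank, sum);

    MPI_Finalize();
    return 0;
}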
+
 \begin{frame}
 \centering
 \Huge{Thank You!}

03-mpi-api/03-mpi-api.toc (+3 −1)
@@ -1 +1,3 @@
-\beamer@sectionintoc {1}{MPI collective operations}{3}{0}{1}
+\beamer@sectionintoc {1}{Advanced Send/Receive API}{3}{0}{1}
+\beamer@sectionintoc {2}{Synchronization}{6}{0}{2}
+\beamer@sectionintoc {3}{Collective operations}{8}{0}{3}
