COSC330/530 Parallel and Distributed Computing
Lecture 21 - Unit Review
Dr. Mitchell Welch
Reading
- Review the self-check exercises.
- Review the Sample Programming Questions (See the Revision Tile)
Summary
- Multi-process Programming
- POSIX Thread Programming
- Distributed Computing with MPI
- GPU Programming in CUDA
- Example Applications
- Answering Exam Questions
- Exam Review
Multi-process Programming
- The first module looked at the development of multi-process programs.
- Basically, fun with fork( ... )
- We covered the basics first.
- Then we looked at the IPC constructs:
- [Pipes (with our ring of processes)](http://turing.une.edu.au/~cosc330/lectures/display_lecture.php?lecture=04#11)
- FIFOs
Multi-process Programming
- Important things:
- Understand the return values for the functions.
- Understand how to construct fans, chains and rings.
- Avoiding zombies and orphan processes!
- What happens if you don't wait() for processes?
- The exec family of calls.
- Connecting pipes between processes.
- Sending data through pipes.
POSIX Thread Programming
- Important points:
- Creating and Joining threads correctly.
- Semaphores and Mutexes - you should know the process and the code to set these up and use them.
- The flow of logic through the dining philosophers and producer/consumer implementations.
- Implementing code that is free of deadlocks.
Distributed Computing with MPI
- Flynn's Taxonomy for classifying processing architectures
- Performance Concepts
- MPI Basics - rank and size
- MPI Communication - send, receive, broadcast, reduce, gather, scatter
- Synchronisation with barriers.
- Tree and Butterfly communication structures.
Distributed Computing with MPI
- Important Points:
- Single program behaves differently on different nodes.
- Steps required to distribute data using broadcast, pack and unpack.
- Why are broadcast, reduce, gather and allgather more efficient than repeated point-to-point sends?
- Fox's algorithm - setting up a grid topology and communicating in the torus.
- Performance evaluation of MPI programs.
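The rank/size and collective-communication points above fit in a short sketch. This assumes an MPI installation (compile with mpicc, run with mpirun); it is illustrative, not the unit's exact example.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    int value = (rank == 0) ? 42 : 0;
    /* tree-structured broadcast from rank 0 to all ranks: O(log p) steps */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    int sum = 0;
    /* combine one int per process onto rank 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("value = %d, sum of ranks = %d\n", value, sum);

    MPI_Finalize();
    return 0;
}
```

The same program runs on every node; only the rank differs, which is why the broadcast and reduce behave differently on rank 0 than elsewhere.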
GPU Programming in CUDA
- Data-Parallel Computing
- CUDA Thread organisation
- Memory organisation
- Synchronisation and Atomic Functions
- Warps and Divergence
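CUDA thread organisation is easiest to recall from a vector-add kernel. This is a sketch (assumes nvcc; unified memory is used only for brevity), not a tuned implementation.

```c
#include <stdio.h>

__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)                  /* guard: the grid may overshoot n */
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   /* unified memory: host + device */
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                        /* threads per block */
    int blocks = (n + threads - 1) / threads; /* enough blocks to cover n */
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                  /* wait for the kernel */

    printf("c[0] = %.1f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The `if (i < n)` guard is also where divergence questions start: threads in the last warp take different branches when n is not a multiple of the block size.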
Example Applications
- Matrix Multiplication
- Producers and Consumers
- Dining Philosophers
- Trapezoidal Rule
- Conway's Game of Life
Answering Exam Questions
- General Hints
- Put something down for each question! I can't give you marks for a blank page.
- Read the questions carefully - not all questions ask for a coded solution, some only ask for a description.
- Some questions may have multiple parts - answer each part for full marks.
- Review the self-check exercises. These are exam-styled questions that will allow you to check your coverage of the content.
Answering Exam Questions
- Focus on the verbs in the questions.
- The verbs are the 'doing' words.
- Make sure you do what they refer to.
- Common verbs include:
- Identify - Recognise and name
- Describe/Explain - Outline the key points of the subject
- Compare - Explain key similarities/differences
- Discuss - Points for and against
- Assess - Discuss the subject and come to a conclusion
Exam Review: Topics to Focus On
- Multi-Process Programming
- Review your system calls!
- You are required to understand the function of each call, the input parameters (types and what they are) and the return values.
- Focus on the main calls (e.g. fork(), the exec() family, exit(), atexit(), open(), read(), dup2(), etc.)
- Know your process states and the situations that result in zombies and orphans
Exam Review: Topics to Focus On
- Constructing fans/chains/trees of processes.
- Waiting on processes and joining threads.
- Review the ring of processes - understand how pipes can be connected in different ways.
- Understand the pipe( ... ) system call examples.
- Review the mmap and fcntl calls. How can these be used for inter-process communication and mutual exclusion?
Exam Review: Topics to Focus On
- Multithreaded Programming:
- Review the producers-consumers and dining philosophers.
- Understand how mutexes and semaphores can be applied to provide mutual exclusion and synchronization in these examples.
- You will be required to construct a C program that uses semaphores/conditional-waiting for synchronization and mutual exclusion.
Exam Review: Topics to Focus On
- Know your MPI system calls!
- Matrix multiplication examples
- Conway's game-of-life example - how might you implement this using a multi-process, multi-threaded or MPI approach?
- Performance analysis examples - how can you assess the performance of a concurrent implementation?
- Know how to implement the communication functions (send, receive, broadcast etc.)
Exam Review: Topics to Focus On
- CUDA Memory stores!
- Allocating memory and launching CUDA Kernels
- The Basics:
- Kernel invocation.
- Memory organization.
- Warps and Divergence.
- Performance implications of warps/divergence and improving efficiency
Exam Review: Topics to Focus On
- Review the self-check exercises - you should be able to answer every one of these questions.
- These are available for each week and have links to the material.
- Feel free to share your attempts on the discussion forums.
- I will comment and clarify if needed.
Summary
- Multi-process Programming
- POSIX Thread Programming
- Distributed Computing with MPI
- GPU Programming in CUDA
- Example Applications
- Answering Exam Questions
- Exam Review
Reading
- Review the self-check exercises.
- Review the Sample Programming Questions (See the Revision Tile)