Course Notes Parallel Algorithms (WISM 459), 2016/2017
Teaching assistant: Abe Wits.
Time and place
Every Wednesday from 10.00 to 12.45 at
Utrecht University, campus De Uithof.
Location of lectures: room 219 in the Buys Ballot building.
First lecture: September 14, 2016.
Last lecture: December 21, 2016.
No class on October 12.
Each session consists of two 45-minute lectures and one 45-minute exercise class
(or computer laboratory class, or question session).
This Master's course is an elective for students
of the Master's programme Mathematical Sciences
of Utrecht University, as part of the Scientific Computing specialisation.
The course is recommended for students in Mathematics or Computer Science
who are interested in scientific computing
and would like to obtain first hands-on experience
in parallel programming.
It is also highly suitable for the Honours master
programme in Theoretical Physics and Mathematical Sciences
and for students interested in Computational Physics.
The course is part of the Dutch National Master programme
and hence is open to all interested Master's students in Mathematics
in the Netherlands.
Registration for everybody is through Mastermath.
Students will learn how to design a parallel algorithm for a
problem from the area of scientific computing
and how to write a parallel program that
solves the problem.
After completion of the course, the student is able to:
- design a parallel algorithm for a problem from the area of scientific computing;
- analyse the cost of the algorithm in terms of computing time, communication time, and synchronisation time;
- write a parallel program based on an algorithm that solves the problem;
- write a report on the algorithm, its analysis, and numerical experiments performed, in a concise and readable form;
- give a clear oral presentation of the algorithm, its analysis, and performance.
The course is worth 8 ECTS credit points.
Today, parallel computers are appearing on our desktops.
The advent of dual-core and quad-core computers, and the expected increase
in the number of cores in the coming years, will inevitably cause a major change
in our approach to software, such as the software we use in scientific computations.
Parallel computers drastically increase our computational capabilities
and thus enable us to model more realistically in many application areas.
To make efficient use of parallel computers, it is necessary to reorganise
the structure of our computational methods.
In particular, attention must be paid to the division of work
among the different processors solving a problem in parallel
and to the communication between them. Suitable parallel algorithms and systems
software are needed to realise the capabilities of parallel computers.
We will discuss extensively the most recent developments in the area
of parallel computers, ranging from multi-core desktop PCs,
to clusters of PCs connected by switching devices,
to massively parallel computers with distributed memory
such as our national supercomputer Cartesius at SURFsara in Amsterdam.
The following subjects will be treated:
- Types of existing parallel computers
- Principles of parallel computation: distributing the work evenly
and avoiding communication
- The Bulk Synchronous Parallel (BSP) model as an idealised model
of a parallel computer
- Use of BSPlib software as the basis for architecture-independent parallel programming
- Parallel algorithms for the following problems:
inner product computation, sorting, prime number sieving, LU decomposition,
sparse matrix-vector multiplication, iterative solution of
linear systems, graph matching
- Analysis of the computation, communication, and synchronisation
time requirements of these algorithms by the BSP model
The course will be given in English.
All reading material will be in English.
Introductory course in linear algebra.
Some knowledge of algorithms. Good knowledge of a modern
programming language such as C, C++, Java, or Python.
Basic knowledge of C is helpful, as we will use this language in class.
For a tutorial in C, if you come from another programming language,
see Appendix A (pages 345-364) of "21st Century C: C Tips from the New School,
2nd Edition", by Ben Klemens, O'Reilly 2014.
There are plenty of books on C;
you may consider the book
Practical C Programming by Steve Oualline.
You can also use C++ in class, if you already know that language.
We will make use of the recently released
MulticoreBSP for C software developed by Albert-Jan Yzelman.
This software runs on shared-memory multicore PCs, and you can also run
your program with only minor changes on distributed-memory machines
such as Cartesius.
Recommended: Bring your own device! In certain weeks, marked below by BYOD,
it is helpful to bring your own laptop, if you possess one,
for use during computer laboratory sessions. Please install the MulticoreBSP library.
On Macs and Linux computers this is straightforward. On Windows machines
you need a UNIX emulator which runs Pthreads, and it
is more difficult to get the software running.
If you do not possess a laptop, perhaps your project partner does
(you are allowed to work in pairs). If this fails, we will find another solution for you.
You will get access to the national supercomputer Cartesius,
where you can install MulticoreBSP for C on one node,
giving access to at most 24 cores on a thin node;
alternatively, you can use BSPonMPI (already installed), giving access to thousands of cores.
The examination is in the form of an initial assignment (30%), a final assignment (40%),
a presentation on the final assignment (15%), and homework (15%).
The two assignments are carried out in the exercise/laboratory class and at home.
A written report must be returned for each assignment before the deadline.
Students can work individually or in pairs (but not in larger teams) on the computer programs
and can hand in a single report, provided each did a fair share of the work and can account for that.
Presentations will be on 14 and 21 December and will be individual.
Homework must be done individually. If needed, you will have to explain your answers to the homework exercises.
There will be 4 times homework to be handed in, spread over the semester,
one for each chapter of the book covered (chapters 1, 2, 4, 5).
All students should submit reports for the assignments
electronically in PDF format by email,
to the teacher, Rob Bisseling.
All homework must be handed in as hardcopy though.
All students must use LaTeX for the assignments; handwritten answers are OK for the homework.
For the first assignment, you will have to submit your parallel program to an automated testing system, Domjudge.
Utrecht students (or those who had a Utrecht F-number in the past)
can use their Solis-id; others should create a new account.
To get everybody started in time, you will have to write a sequential prime
number sieve and submit it before September 29. For questions on the Domjudge system,
please ask Abe Wits.
Instructions for use will become available soon.
We closely follow the book
Scientific Computation: A Structured Approach using BSP and MPI (PSC),
by Rob H. Bisseling,
Oxford University Press,
March 2004. ISBN 0-19-852939-2.
Please note that the book is now
available through the print-on-demand system of Oxford University Press,
at the book's OUP website.
If you want to buy the book, contact me first,
as I can provide the participants with a discount.
I am currently working on a second edition (PSC2),
and some additional material will be given to you
(and it will be tested on you!).
In addition, all my slides and WWW links to
background material are provided through this course page.
(Pointers to other relevant links are welcome!)
LaTeX sources (in Beamer) and PDFs of all my slides
(18 Mbyte in gzipped tar format).
The sources may be of help to other teachers
who want to teach from my book.
Students may find them helpful too.
Last update of the files: November 6, 2014.
The slides are complete
and they cover every section of the book (first edition).
They were converted to the Beamer macros in September 2012 and brought up to date in the process.
I am adding experimental results on recent computer architectures as we go.
You can also obtain the PDFs separately. Each PDF file represents one
lecture of about 45 minutes.
Chapter 1: sections
Chapter 2: sections
Chapter 3: sections
Chapter 4: sections
Programming in BSP style using MPI,
Slides for the second edition
We will develop them as the second edition is written.
Beta-versions will be provided below as they become available over the coming years,
as separate links.
U. Naumann and O. Schenk (eds), Combinatorial Scientific Computing,
CRC Press, 2012.
J. J. Dongarra, I. S. Duff, D. C. Sorensen, and H. A. van der Vorst,
Numerical Linear Algebra for High-Performance Computers,
G. H. Golub and C. F. van Loan, Matrix Computations,
Fourth edition, The Johns Hopkins University Press, 2012.
Some weekly summaries and links
Wednesday September 14, 2016. Room 219 Buys Ballot building
The Bulk Synchronous Parallel model
(PSC section 1.2)
What is a parallel computer? Why do we compute in parallel?
The Bulk Synchronous Parallel (BSP) model by Valiant
comprises an abstract machine architecture, a framework for developing algorithms,
and a cost function for analysing the run time of algorithms.
The BSP architecture is a set of processor-memory pairs
connected by a black box communication network.
BSP algorithms consist of computation supersteps
and communication supersteps.
Communication is costed as h-relations.
The BSP cost of an algorithm is expressed
in machine parameters p, g, l.
The computing rate is r flop/s.
Motivation of the cost model: bottlenecks at the entrance
and exit of the communication network.
Parallel Inner Product Computation
(PSC section 1.3)
My First Parallel Algorithm: the computation
of the inner product of two vectors.
Possible distributions: cyclic and block.
Data distribution leads naturally to work distribution.
Communication is needed to add the local inner products into a global inner product.
Single Program, Multiple Data (SPMD) approach:
we don't want to write 400 different programs for a machine with 400 processors.
One-sided communication is a great invention
(in parallel computing, at least;).
In class: Exercise 1.2.
Please hand in Exercise 1.1 as Homework 1 (HW1) on September 21, 2016.
- Top 500 of supercomputers in the world.
- SURFsara in Amsterdam, home of the Cartesius national supercomputer,
which we will use in our laboratory class and homework. It is a Bullx machine.
Cartesius currently has 40960 cores organised in thin and fat nodes.
Its peak performance is 1559 Tflop/s.
Wednesday September 21, 2016.
BSPlib, the Bulk Synchronous Parallel library.
(PSC section 1.4)
(PSC sections 1.5-1.7)
Computer Laboratory BYOD
(from 12.00 to 12.45):
starting with BSPlib.
This is your first hands-on session with BSPlib.
Download the latest version of
my package of educational programs that
teaches how to use BSP.
Note that a beta-version for the second edition will also become available soon;
once available, preferably use that version.
Solve Exercise 1.3: try to run the benchmark,
exploring your parallel environment: your own laptop
and Cartesius. Change puts into gets.
BYOD: If you own a laptop, bring it to class,
so we can install MulticoreBSP for C and check your software installation.
Wednesday September 28, 2016.
Parallel sample sort
New material, presenting the BSP approach to sorting by regular sampling.
Will become a section in Chapter 1 of PSC2.
Sequential LU Decomposition
(PSC sections 2.1-2.2)
Discussion of Homework 1 (HW1). Answers to Exercise 1.2.
Starting Exercise 1.7 from PSC. Finish running your sequential prime number sieve on Domjudge.
Design a parallel algorithm
for prime number generation.
Write your first parallel program.
This program should generate all prime numbers below 1,000,000 in parallel.
Choosing the right distribution is the name of the game.
Hand in a report on Exercise 1.7 and submit your program to the checking system, before the deadline,
Wednesday October 19, 10.00 hour.
Note that you can work in pairs.
The format for your program is specified in the
primes program specification, TBA.
Guideline mathematical writing by the Department of Mathematics,
Utrecht University, a must for those writing a report for this course.
You may also find it useful to learn some vi editor commands
on the Linux system.
Wednesday October 5, 2016. Today's lectures by Jan-Willem Buurlage (CWI)
Parallel LU Decomposition
(PSC section 2.3)
Developed by Abe Wits, Jan-Willem Buurlage, and Tom Bannink.
See also last year's lecture by Abe Wits.
The Epiphany chip is a 16-core chip with only 32 KB of RAM per core and incomplete
documentation. It is also the world's most energy-efficient processor! How do you
write parallel C code for such a platform? What if you want to reuse your code
on a supercomputer and your laptop? We will discuss one possible solution: BSP,
a parallel programming model. Learn about programming on the Parallella chip,
the Epiphany architecture, BSP, streaming data structures and more!
The Parallella is like a $100, overpowered, 18-core Raspberry Pi.
Continue work on the prime number sieve.
Wednesday October 12, 2016.
No class because of
SIAM workshop on Combinatorial Scientific Computing, Albuquerque, NM, USA
Wednesday October 19, 2016. A special day.
(PSC section 2.4)
Experiments with bsplu
(PSC sections 2.5-2.6)
Choose a new assignment. Possibilities:
- solve Exercise 1.8 (new) on parallel sorting algorithms;
- solve Exercise 2.5 on Cholesky factorisation;
- develop a communication-avoiding LU decomposition with multiple-rank updates;
- write a parallel program for maximum-cardinality matching in a bipartite graph;
- solve Exercise 5.1 (new): write a parallel program for counting self-avoiding walks (SAWs)
on a graph or a 2D or 3D lattice;
- solve Exercise 5.2 (new): write a parallel program for counting triangles in a social network graph.
For some of the projects you could use the Parallella board.
Requests for different topics will be considered.
Please try to discuss the chosen topic with the teacher before mid-November.
Start working on Homework 2 (HW2) on Chapter 2, Exercise 2.2.
Hand it in on October 26.
Wednesday October 26, 2016
Sequential sparse matrix-vector multiplication
(PSC section 4.1)
Sparse matrices and their data structures
(PSC section 4.2)
Discussion of final project.
Wednesday November 2, 2016
Parallel sparse matrix-vector multiplication
(PSC section 4.3)
Cartesian matrix distributions
(PSC section 4.4)
Discussion of HW2.
Wednesday November 9, 2016.
Mondriaan sparse matrix distribution
(PSC section 4.5)
Medium-grain method for sparse matrix distribution
Slides on the medium-grain method by Daan Pelt and Rob Bisseling,
presented May 2014 at IPDPS'14.
A medium-grain method for fast 2D bipartitioning of sparse matrices by
D. M. Pelt and R. H. Bisseling,
Proceedings IEEE International Parallel & Distributed Processing Symposium 2014,
IEEE Press, pp. 529-539.
MondriaanOpt for optimal sparse matrix bipartitioning,
Database of optimally partitioned matrices for 2 processors,
with nice pictures.
Edge-based graph partitioning,
Slides of lecture on medium-grain and optimal partitioning, Sparse Days at St. Girons, France, June 30, 2015.
Wednesday November 16, 2016.
(PSC section 4.8)
Program bspmv and bulk synchronous message passing primitives
(PSC section 4.9)
Wikipedia page about speeding up search algorithms (such as those employed in computer chess)
Homework HW3: Exercise 4.2, but with a 16 x 16 grid instead of a 12 x 12 grid.
If you use LaTeX, try TikZ to produce a nice figure.
Hand in November 23.
Wednesday November 23, 2016.
Parallel graph matching
(New material, PSC2 Chapter 5)
Wednesday November 30, 2016.
Guest lecture TBA
Wednesday December 7, 2016.
Parallel graph matching (continued)
Homework HW4 on parallel graph matching.
Register your final presentation.
Wednesday December 14, 2016.
Presentations final project
Wednesday December 21, 2016.
More presentations final project
Please fill out the
Mastermath digital evaluation form.
Deadline second assignment Monday January 16, 2017, 12.00 hour.
Hand in a report before the deadline.
Please use the batch system of Cartesius and not the interactive system
for your test runs. The interactive system is only for development.
Frequently asked questions
Other courses that teach the BSP model (among others)
Last update of this page: September 21, 2016
Back to homepage of Rob Bisseling