Although I should probably do some error checking:
atol is pretty bulletproof, unless you pass it NULL
strtok could return NULL
Note: strtok returns the next token from a string. It keeps its own offset into the string; hence, after the first call, the string address is no longer needed and NULL is passed instead.
See man strtok for the details.
Fibonacci on the Ring of Processes
#include <stdio.h> /* for printf */
#include <stdlib.h> /* for atol */
#include <unistd.h> /* for read */
#include <string.h> /* for strtok */
#define BUFFSIZE 50
void get_integers(char buff[], long *first, long *second){
    read(STDIN_FILENO, buff, BUFFSIZE);
    *first = atol(strtok(buff, " "));
    *second = atol(strtok(NULL, " "));
}
int main(void){
    char buff[BUFFSIZE];
    long first, second;
    get_integers(buff, &first, &second);
    printf("Got %ld and %ld\n", first, second);
    return 0;
}
Fibonacci on the Ring of Processes
Play around with example01.c.
Note how it is prone to SIGSEGVs, that is, segmentation faults caused by incorrect memory access.
Tracking down SIGSEGVs. Two methods:
One using gdb or DDD, the other using valgrind
Usually one will work, often both.
Valgrind will find errors even if your program doesn't crash (yet).
Fibonacci on the Ring of Processes
First compile with the -g flag.
Start gdb/DDD with the name of the executable:
tabasco.Examples> gdb get_integers
Issue the run command with any command line arguments your executable expects.
(gdb) run
Look at the state when it crashed using the backtrace command bt.
Read man gdb
Fibonacci on the Ring of Processes
Call valgrind with the name of the executable and any command line arguments your executable expects.
Read what it says.
Reporting the result is pretty routine, though a bit ugly to typeset nicely:
void report_sum(int i, long fst, long old_snd, long snd, long thrd){
    fprintf(stderr,
            "\nProcess %d "
            "with PID %d "
            "and parent PID %d "
            "received %ld %ld "
            "and sent %ld %ld.\n",
            i,
            (int)getpid(),
            (int)getppid(),
            fst,
            old_snd,
            snd,
            thrd);
}
Fibonacci on the Ring of Processes
int main(int argc, char *argv[]){
    int i;
    char buff[BUFFSIZE];
    long first;
    long second;
    long third;
    /* make the ring here */
    {
        /* ring process code */
        long old_second;
        if(i == 1){
            /* mother of all processes */
            send_numbers(buff, 1, 1);
        }
        /* all processes */
        get_integers(buff, &first, &second);
        old_second = second;
        safe_sum(first, &second, &third);
        if(i > 1){
            send_numbers(buff, second, third);
            report_sum(i, first, old_second, second, third);
        } else {
            fprintf(stderr,
                    "\n\nfibonacci(%d) = %ld\n\n",
                    nprocs,
                    first);
        }
        exit(EXIT_SUCCESS);
    }
} /* end of main program here */
Fibonacci on the Ring of Processes
Review the full listing:
fibonacci.c
Patterns of Parallelisation
From our study of pipes, we can start to see that problems can be broken down in different ways.
The simplest pattern of parallelism is Result Parallelism (such problems are also referred to as embarrassingly parallel).
Each result comprises part of the solution to a problem and can be calculated without any communication with the other processes.
Parallelism can be fine-grained or coarse-grained
Patterns of Parallelisation
Most problems will have some processing inter-dependencies (for example, our Fibonacci sequence, and operations such as matrix multiplication).
We can refer to these as Sequential Dependencies: part of the computation requires synchronisation and communication to solve the problem.
Patterns of Parallelisation
One variation on agenda parallelism that is of particular interest is Reduction
Reduction involves calculating a single aggregate result from multiple results.
In this arrangement, processes 1 through n calculate an intermediate result and a final process performs the reduction operation.
Patterns of Parallelisation
An agenda parallel problem with more tasks than processors must use a clumping approach, where each processor is responsible for the subset of tasks partitioned to it.
On a cluster parallel computer, the clumping approach is often implemented using a master-worker pattern.
Effectively, the master processor is responsible for the agenda and is responsible for sending tasks to a set of workers.
Each worker completes the task and sends the result to the master.
Patterns of Parallelisation
Summary
Fibonacci on the ring of processes
Patterns of Parallelism
Questions?
Reading
Chapter 15 from Advanced Programming in the UNIX Environment