Lab 4 (5 points)
CS550, Operating Systems
Introduction to C-Style Dynamic Memory, POSIX Threads, Shared
Memory, and MPI Message Passing (IPC)
Name: _____________________________________________
This assignment is loosely based upon the UNIX labs from the CS2351 course at Oklahoma State University, available at cs.okstate.edu/newuser/index.html. To submit this assignment, you may copy and paste the assignment into a text editor such as nano, vi, notepad, MS Word, OpenOffice Writer, etc. Zip the code and scripts showing the output of your solutions, and submit the zip file to the dropbox for lab 4. The purpose of this lesson is to learn about dynamic memory allocation, POSIX Threads, shared memory between processes, and MPI Message Passing (a form of interprocess communication, or IPC) in the C programming language.
1. Complete the following program that requires dynamic array
allocation. Add code in place of the ". . ." locations
provided below. You may copy and paste the code below, but
be sure to convert it to ASCII before attempting to compile.
#include <stdio.h>
#include <stdlib.h>
void print_array(int *, int, char *);
int * sum_arrays(int *, int *, int);
int main(int argc, char ** argv)
{
//Declare a pointer to an int
int * arrA;
int * arrB;
int size;
int i;
printf("Enter the array size: ");
scanf("%d", &size);
//Allocate memory for array A. This must be casted
as an int pointer
//because malloc returns a void pointer (void *). A
void pointer is a
//pointer with no assigned type.
arrA = (int *) malloc(sizeof(int)*size);
//Do the same for array B. Note that B will be the same size as A.
...
//Store data in the arrays
for(i = 0; i < size; i++)
arrA[i] = arrB[i] = i;
int * arrC = sum_arrays(arrA, arrB, size);
//print the arrays
print_array(arrA, size, "A");
print_array(arrB, size, "B");
print_array(arrC, size, "C");
//Free (deallocate) arrays A, B, and C here.
free( (void *) arrA);
...
return 0;
}
//Complete the print_array and sum_arrays functions here.
//Hint: the output format for print_array is up to you, but it
//should include the name of the array.
//Hint: you must allocate and return the array arrC within the
//sum_arrays function.
...
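For reference, the short standalone program below reviews the allocate/use/free pattern that the skeleton above relies on. It is not a solution to this problem; the array size, the stored values, and the helper function make_evens are arbitrary choices used only to illustrate returning a dynamically allocated array from a function (which the caller must then free).
#include <stdio.h>
#include <stdlib.h>
//Return a newly allocated array holding the first n even numbers.
//The caller is responsible for freeing the returned memory.
int * make_evens(int n)
{
    int * arr = (int *) malloc(sizeof(int)*n);
    int i;
    for(i = 0; i < n; i++)
        arr[i] = 2*i;
    return arr;
}
int main(int argc, char ** argv)
{
    int n = 5;
    int i;
    int * evens = make_evens(n);
    for(i = 0; i < n; i++)
        printf("evens[%d] = %d\n", i, evens[i]);
    free( (void *) evens);
    return 0;
}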
2. Complete the following C program that uses six POSIX
threads to add two matrices (arrays) of size 6 by n, called
A and B, storing the sum in a third matrix of size 6 by n called C,
where n is any integer greater than 0. These matrices should be
of type static double **.
You must allocate the matrices in your main function and fill
A and B with values. You may choose any values to place in your
matrices, but it is recommended that you use ones that may be
reproduced and are easy to add. Each thread must add one of the six
rows of A to the same row of B, storing the results in the
appropriate row of C. Your main function should print the
results of this addition. Part of the main function is
provided below. Hint: refer to the code from lab 3. A short
standalone reminder of the pthread pattern appears after the
compilation command for this problem.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#define NUM_THREADS 6
static double ** matrixA;
//declare matrixB and matrixC here
...
static int d2_size;
//Declare our thread function called matrix_add_thread
void * matrix_add_thread(void *);
int main(int argc, char ** argv)
{
pthread_t threadList[NUM_THREADS]; //Declare a list of threads
int return_code;
int i; //counter to keep track of the current thread identifier
int j;
printf("Enter the size of the second dimension of the
matrix: ");
scanf("%d", &d2_size);
matrixA = (double **) malloc(sizeof(double *)*NUM_THREADS);
//Allocate the first dimension of matrixB and matrixC here
...
//Allocate the second dimension of each matrix
for(i = 0; i < NUM_THREADS; i++)
{
matrixA[i] = (double *) malloc(sizeof(double)*d2_size);
...
}
//Initialize matrixA and matrixB here
...
for(i = 0; i < NUM_THREADS; i++)
{
printf("From main creating thread %d\n", i);
return_code =
pthread_create(&threadList[i], NULL, MatAddThread,
(void *) ((long)i));
if(return_code != NULL)
{
printf("The return code from
thread %d is %d\n", i, return_code);
exit(-1);
}
}
for(i = 0; i < NUM_THREADS; i++)
{
return_code = pthread_join(threadList[i], NULL);
if(return_code != 0)
{
printf("Unable to join thread %d\n", i);
exit(-1);
}
}
for(i = 0; i < NUM_THREADS; i++)
for(j = 0; j < d2_size; j++)
printf("matrixC[%d][%d] = %lf\n"
, i, j, matrixC[i][j]);
for(i = 0; i < NUM_THREADS; i++)
{
free((void *) matrixA[i]);
//Free the other matrices here
...
}
free((void *) matrixA);
//Free matrixB and matrixC here
...
//Kill the main thread.
pthread_exit(NULL);
return 0;
}
//Complete the matrix_add_thread function here
...
To compile the program above, assuming it is named lab4Threads.c,
use the following command on any Linux system:
gcc lab4Threads.c -pthread -o lab4Threads.exe
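The standalone program below is not the solution to problem 2; it only reviews the pthread_create/pthread_join pattern and the (void *) ((long) i) trick for passing a thread index, both of which the skeleton above uses. The file name pthread_hello.c, the function name hello_thread, and the printed messages are placeholders.
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#define NUM_THREADS 4
//Thread function: recover the index passed through the void * argument.
void * hello_thread(void * arg)
{
    long id = (long) arg;
    printf("Hello from thread %ld\n", id);
    pthread_exit(NULL);
}
int main(int argc, char ** argv)
{
    pthread_t threads[NUM_THREADS];
    long i;
    //Create the threads, passing each one its index.
    for(i = 0; i < NUM_THREADS; i++)
    {
        if(pthread_create(&threads[i], NULL, hello_thread, (void *) i) != 0)
        {
            printf("Unable to create thread %ld\n", i);
            exit(-1);
        }
    }
    //Wait for every thread to finish.
    for(i = 0; i < NUM_THREADS; i++)
    {
        if(pthread_join(threads[i], NULL) != 0)
        {
            printf("Unable to join thread %ld\n", i);
            exit(-1);
        }
    }
    return 0;
}
It compiles with the same -pthread flag shown above, for example:
gcc pthread_hello.c -pthread -o pthread_hello.exe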
3. Compile and run the two shared memory programs on the examples
page of the course website. Run both programs at the same
time in different windows. If you are using LittleFe, open
two PuTTY or ssh terminals to LittleFe. If you are using
BCCD, use two terminals to complete this program. Open one
program in one window and one in the other. Note that the
producer program must be started first.
To compile these programs, assuming they are named shmem_producer.c
and shmem_consumer.c, use the following commands on the
LittleFe cluster or another computer running the BCCD operating
system:
gcc shmem_producer.c -lrt -o shmemp.exe
gcc shmem_consumer.c -lrt -o shmemc.exe
After running the two programs, modify the producer to put ten
digits (nine through zero) in shared memory. Once complete,
shared memory should contain 9876543210. This program should
pause for 1 second between each digit added (hint: use
sleep(1)). After a digit is added to the shared memory, be
sure to increment the shared memory pointer. Modify the
consumer to continuously print the data in shared memory.
The consumer should keep printing data for 15 seconds. You
will need to include time.h to allow for this action.
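The two functions below sketch one way to write the loops described above. They are fragments meant to be adapted into the course's shmem_producer.c and shmem_consumer.c, not complete programs; the parameter name shm_ptr is a placeholder for whatever pointer those example programs use for the attached shared memory segment, and the trailing '\0' written by the producer is an assumption made so the consumer can print the buffer with %s.
#include <stdio.h>
#include <time.h>
#include <unistd.h>
//Producer side: write the digits 9 through 0 into shared memory,
//one per second, advancing the pointer after each digit.
void produce_digits(char * shm_ptr)
{
    char digit;
    for(digit = '9'; digit >= '0'; digit--)
    {
        *shm_ptr = digit;   //store the current digit
        shm_ptr++;          //increment the shared memory pointer
        sleep(1);           //pause for 1 second between digits
    }
    *shm_ptr = '\0';        //assumption: terminate the string for printing
}
//Consumer side: keep printing the contents of shared memory for 15 seconds.
void consume_digits(char * shm_ptr)
{
    time_t start = time(NULL);
    while(time(NULL) - start < 15)
        printf("Shared memory contains: %s\n", shm_ptr);
}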
4. The following MPI program sends a message from a server process (rank 0) to a client process (rank 1).
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"
int main (int argc, char** argv)
{
int number_of_processes;
int my_rank;
int mpi_error_code;
mpi_error_code = MPI_Init(&argc, &argv);
mpi_error_code = MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
mpi_error_code = MPI_Comm_size(MPI_COMM_WORLD, &number_of_processes);
//This is the server process.
if(my_rank == 0)
{
printf("Hello from the
server!\n");
mpi_error_code = MPI_Send("Hi
from server!", 16, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
printf("Message was sent from
server!\n");
}
else
{
printf("Hello from the
client!\n");
char str[20];
mpi_error_code = MPI_Recv(str,
20, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("The message received in the
client is: %s\n", str);
}
mpi_error_code = MPI_Finalize();
return 0;
}
Save this program in a file named messages.c on Stampede
and on LittleFe.
Compile your program on the login node using the following
command. Note that this is ok because the compilation uses
very few resources.
mpicc messages.c -O3 -o messages.exe
Note that the executable file name matches exactly with what will run in the batch script using ibrun.
On Stampede, you must submit a batch script to run your
program. This means your job waits in a virtual line (a
queue) before it may run. The batch queue scheduler on
Stampede is called SLURM (Simple Linux Utility for Resource
Management). Write a SLURM batch script for your program as
follows:
#!/bin/bash
#SBATCH -A TG-SEE120004 #Account number
#SBATCH -N 1 -n 2 #Request 1 node (blade) with 2 tasks (cores)
#SBATCH -J messages #Job name
#SBATCH -o msg.o%j #Output file name
#SBATCH -p normal #Queue to use
#SBATCH -t 00:01:00 #Run (wall) time 1 min
ibrun messages.exe
Save this batch script in a file called messages.sbatch
on Stampede.
Fix any errors in your program, and submit your program to the
batch queue using the following command:
sbatch messages.sbatch
Once your program finishes, the output will show up in a file
called msg.oJobNumber, where JobNumber is the job
number assigned to your job by Stampede. You can find this
file by using the ls command. Turn in the
contents of this file.
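For example, a wildcard pattern such as the following lists only these output files (the job number portion will differ for each run):
ls msg.o*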
While your program is waiting in the batch queue, run the following commands on
Stampede. If necessary, submit your program to the batch
queue again and quickly run the commands below. Note that
the up and down arrow keys can be used to scroll through
previously issued commands.
Recall that the following command allows you to see all jobs
running on Stampede.
showq
You can check the status of your job using the following command:
squeue -u Username
Where Username is your Stampede username.
Note that you may have a program that runs in an infinite loop or
enters a condition called deadlock or livelock. To end such
a program that is running on Stampede, first determine the
job number using squeue as shown above, and then kill the program
with the following command:
scancel JobNumber
5. Modify the program above to send messages from the server to
15 different processes. This will require a loop that sends the
messages in the server portion (the "if" statement), but no loop in the
client portion (the "else" portion); a sketch of the modified server
branch appears at the end of this handout. Note that you will need to
modify the batch script to use a total of 16 tasks. Run this
program again using the command:
sbatch messages.sbatch
Once your program finishes, the output will show up in a file called msg.oJobNumber, where JobNumber is the job number assigned to your job by Stampede. You can find this file by using the ls command. Turn in your source code, your modified batch script, and the contents of the output file.
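For reference while working on part 5, here is one possible shape for the modified server branch. It is only a sketch of the changed "if" portion of messages.c, not a complete program; the client branch is unchanged, and the message text, the count of 16, and the tag of 0 are simply the same illustrative values used in the program above.
//Sketch: the server (rank 0) sends the same message to every other process.
//Assumes the batch script now requests 16 tasks, so ranks 1 through 15 exist.
if(my_rank == 0)
{
    int destination;
    printf("Hello from the server!\n");
    for(destination = 1; destination < number_of_processes; destination++)
    {
        //"Hi from server!" is 15 characters plus the terminating '\0'.
        mpi_error_code = MPI_Send("Hi from server!", 16, MPI_CHAR,
                                  destination, 0, MPI_COMM_WORLD);
        printf("Message was sent from server to process %d!\n", destination);
    }
}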