We are recommending three possible projects:

1. RELIABLE TRANSMISSION OVER UDP

You have to build a layer over UDP that ensures packets are delivered in sequence and without loss. You can use either acks/retransmissions or Forward Error Correction (FEC) to achieve this. A typical scenario has a sender and a receiver on different machines. Your implementation will be evaluated on two metrics: the percentage of useful bytes transferred and the total transfer time. The sender will either generate random data or use a file stored in a ramdisk (the idea is to read the data from memory instead of the hard disk, so that the total transfer time is not affected significantly).

Your program will receive two parameters: a loss rate and a re-ordering rate. You have to simulate packet losses and re-orderings at either the sender side or the receiver side, based on these two parameters. There should also be some mechanism to verify the step-by-step working of your protocol without running your programs in a debugger; how you do this is up to you. One suggestion is to have two modes of operation: batch mode and debuggable mode. In batch mode, an entire file is transferred. In debuggable mode, the user should be able to control the execution of the program.

2. LOAD BALANCING AND FAULT-TOLERANCE OF WEB SERVERS

This is the actual implementation of your homework 2. Consider that you have multiple web servers at your disposal. You have to build the middleware that intercepts client requests and distributes them to different servers based on the existing workload. This middleware consists of a client that issues one request and receives one response (like a web browser) and a set of processes that receive the request, replicate it, send the copies to the front ends, etc. Note that the number of processes and the design of the software are up to you.
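For Project 1, one way to implement the required loss/re-ordering simulation is sketched below. The function names and the swap-with-successor reorder model are assumptions of this sketch, not requirements of the assignment; any model driven by the two rate parameters would do. The receiver-side helper shows how per-packet sequence numbers let you verify delivery.

```python
import random

def simulate_channel(packets, loss_rate, reorder_rate, rng=None):
    """Simulate an unreliable channel. Each packet is dropped with
    probability loss_rate; each surviving packet is, with probability
    reorder_rate, swapped with its successor (one simple reorder model)."""
    rng = rng or random.Random()
    survivors = [p for p in packets if rng.random() >= loss_rate]
    i = 0
    while i < len(survivors) - 1:
        if rng.random() < reorder_rate:
            survivors[i], survivors[i + 1] = survivors[i + 1], survivors[i]
            i += 2  # skip past the swapped pair
        else:
            i += 1
    return survivors

def missing_and_out_of_order(seqs, total):
    """Given received sequence numbers, report which of the `total`
    packets never arrived and which arrived after a higher-numbered one."""
    missing = sorted(set(range(total)) - set(seqs))
    out_of_order = [s for prev, s in zip(seqs, seqs[1:]) if s < prev]
    return missing, out_of_order
```

Passing a seeded `random.Random` makes a run reproducible, which is handy for the debuggable mode suggested above.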
Also, your scheme must be robust (as in homework 2) to both web-server and front-end crashes, but not to client crashes (that is, assume the clients do not crash). Further, if you use a request replicator before the requests reach the front ends, you can assume that this replicator is fault-free. The client requests can be generated using WebStone, which will be available for your use in the test network. The metric you should use is the response time of each request (average, min, and max; in fact, a report that shows the distribution of response times along a time axis is a good idea).

3. DISTRIBUTED MAKE UTILITY

This project will probably require some compiler knowledge, since it calls for a program that parses a Makefile and generates a tree of execution. The idea is to find the parallelism in a make process and distribute the tasks efficiently among different machines (running NFS/AFS) to speed up the compilation process. You have to make it work for something like a Linux kernel compilation. Contact either mosse@cs.pitt.edu or src@cs.pitt.edu for further details.

4. You can also propose your own project, which should be equivalent in scope to one of the projects above.

Suggested Readings:

TCP: RFC 793 (RFCs, or "Requests For Comments", are documents that present new ideas and proposed standards for the Internet; they can be found at http://www.ietf.org/rfc.html)
FEC: http://www.ece.wpi.edu/courses/ee535/hwk97/hwk4cd97/bad/paper.html
WebStone: http://www.mindcraft.com/webstone/
RamDisk: http://www.linuxfocus.org/English/November1999/article124.html

Test Bed: A cluster of 7 Red Hat 7.2 servers (Linux kernel version 2.4.7-10), gcc version 2.96. More information on this later, but you can start development on any equivalent system. You must run your programs on the cluster, so that we can control
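For Project 2, the core dispatching decision and the suggested response-time metrics could be sketched as follows. The class and method names are inventions of this sketch (the assignment leaves the design entirely to you); "fewest outstanding requests" is just one reasonable definition of "existing workload".

```python
class LoadBalancer:
    """Send each request to the live server with the fewest requests
    currently in flight; servers detected as crashed are skipped."""

    def __init__(self, servers):
        self.outstanding = {s: 0 for s in servers}  # requests in flight
        self.alive = set(servers)

    def mark_crashed(self, server):
        self.alive.discard(server)

    def pick(self):
        live = [s for s in self.outstanding if s in self.alive]
        if not live:
            raise RuntimeError("no live web servers")
        return min(live, key=lambda s: self.outstanding[s])

    def start(self, server):
        self.outstanding[server] += 1

    def finish(self, server):
        self.outstanding[server] -= 1

def response_time_stats(times):
    """Average, min and max response time, as the handout suggests;
    the per-request list itself gives you the distribution over time."""
    return (sum(times) / len(times), min(times), max(times))
```

A real middleware would wrap `start`/`finish` around the actual forwarding of the request and use timeouts or heartbeats to drive `mark_crashed`.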
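For Project 3, the "tree of execution" amounts to a dependency graph, and the parallelism can be extracted by repeatedly peeling off the targets whose prerequisites are all built (a Kahn-style topological sort). The sketch below assumes the Makefile has already been parsed into a target-to-prerequisites mapping; the parsing itself, and the distribution of each batch across machines, are the real work of the project.

```python
def parallel_batches(deps):
    """deps: target -> list of prerequisite targets.
    Returns a list of batches; every target in a batch has all its
    prerequisites built by earlier batches, so the targets within one
    batch can be compiled on different machines in parallel."""
    remaining = {t: set(d) for t, d in deps.items()}
    for prereqs in deps.values():
        for p in prereqs:
            remaining.setdefault(p, set())  # leaf prerequisites (sources)
    batches = []
    while remaining:
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("cycle in Makefile dependencies")
        batches.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)
    return batches
```

For example, `{"app": ["a.o", "b.o"], "a.o": ["a.c"], "b.o": ["b.c"]}` yields the sources first, then both object files (which can build in parallel), then the final link.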