You may work in your duet partnership on this assignment if you wish.
We have provided some starting files. Like other warmups, and unlike your project, the design is well specified so that you can just fill in the implementation.
As with any container class, there are two very separate jobs for you to do: prepare the structure you're storing in the hash table by providing the set of functions it needs, and write the hash table code itself.
Here are some files to start you out.
Your exercise today is to complete this hash table implementation by providing definitions for the functions specified in the header files. The functions generate_hash and get_key merit some additional explanation, which appears below in this document. The rest of the functions are specified in the header files in some detail; please review those files carefully. For our exercise, we assume that we are storing (key, value) pairs such as ("uchicago", "60637"). The key being stored is an unsigned long int, while the value is a void pointer that can be used to store any type of value.
When displaying the hash table (print_hash_table_entries), you must follow our format: the bucket indexes must appear as numbers at the left, enclosed in square brackets, one per line, with the contents of each bucket following it. If the hash table is chained, each key,value pair is shown in parentheses with a "->" symbol. The key is what get_key returns, and the value is what print_word returns.
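For instance, the output for a small chained table might look roughly like the illustration below. The key1, value1, etc. are placeholders only (each key is the number returned by get_key and each value is what print_word produces); check the starter code for the exact spacing and separators.

    [0] (key1 -> value1) (key2 -> value2)
    [1]
    [2] (key3 -> value3)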
You need two functions from word.c - a compare function and a get_key function. You will use the compare_words function that you already wrote in a previous assignment; then you will need to write get_key as specified below.
You need to write a function (get_key) to get from the wordcount structs stored in the hash table to a key. In this case, the key is not guaranteed to be unique, because the space of possible words is larger than the 2^64 possible key values. However, our compare operation in wordcount does more than compare keys - it compares words - so we're okay.
Follow this recipe for computing an unsigned long int key for a string: start with a result of 17; then, for each character in the string, multiply the running result by 37 and add the character's ASCII value. The final result is the key.
(This hashing algorithm recipe is from Effective Java by Joshua Bloch.)
For example, the hash code for the word FORK, given that the ASCII values of the characters in FORK are 70, 79, 82 and 75, is as follows:
res = 17
res = 37*17 + 70 = 699
res = 37*699 + 79 = 25942
res = 37*25942 + 82 = 959936
res = 37*959936 + 75 = 35517707
(As you can see, the numbers get big.)
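A minimal C sketch of this recipe follows. It assumes a hypothetical wordcount layout with a word field; match the actual struct and the get_key signature declared in the provided header files when you implement it.

    #include <stdio.h>

    struct wordcount {            /* hypothetical layout, for illustration only */
        char *word;
        int count;
    };

    unsigned long int get_key(struct wordcount *wc)
    {
        unsigned long int res = 17;
        /* res = 37*res + ASCII value, for each character of the word */
        for (char *p = wc->word; *p != '\0'; p++) {
            res = 37 * res + (unsigned long int)(unsigned char)*p;
        }
        return res;
    }

    int main(void)
    {
        struct wordcount wc = { "FORK", 1 };
        printf("%lu\n", get_key(&wc));   /* prints 35517707, as in the worked example */
        return 0;
    }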
Now that your wordcount struct has produced an integer key, the hash table needs to take that key and map it to a location within the hash table. You need to convert this long integer into an integer index into the hash table array (generate_hash).
We will go with the simplest of approaches here. To identify the index of FORK in a hash table of size, say, 33, you compute the remainder of the key when divided by 33. In this case 35517707%33 is 5, so FORK would be inserted into and/or located in the bucket with index 5 (the sixth bucket in the table).
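A minimal sketch of this modulo approach is below; the exact signature of generate_hash is given in the header files, so treat the parameter names here as illustrative.

    unsigned int generate_hash(unsigned long int key, unsigned int table_size)
    {
        /* e.g. 35517707 % 33 == 5, so FORK lands in bucket 5 */
        return (unsigned int)(key % table_size);
    }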
Once you have implemented the hash function above, you will need to implement methods to handle collisions in your hash table. Rather than choosing a single mechanism, the purpose of this assignment is to analyze the performance of three popular mechanisms - chaining (linked lists), linear probing, and quadratic probing.
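As a rough sketch of how the three mechanisms differ (here h is the bucket index produced by generate_hash and size is the number of buckets; the quadratic form shown is one common choice, so follow whatever the header files specify):

    chaining:           all colliding items are appended to a linked list in bucket h
    linear probing:     try (h + i) % size    for i = 0, 1, 2, ...
    quadratic probing:  try (h + i*i) % size  for i = 0, 1, 2, ...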
Please note that in creating and using the hash table functions, you may only use the generic create_hash_table(), insert(), and search() functions. You must store the insertion and searching function pointers in the hash_table_t struct when creating the hash table with create_hash_table().

Since we're interested in the performance of these individual collision handling mechanisms, we need to measure the number of comparisons required to perform the requested operation (insert or search). This can be done by passing an integer value (opcount) by reference into each of the functions. Starting from 0, increment opcount by 1 for every hash table entry (or chained list item) that has been visited (and, therefore, compared). This will allow us to see exactly how many operations it takes for each of the mechanisms to insert into and search through the hash table.
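The following is a rough sketch of how opcount might be threaded through a linear-probing search. The hash_table_t layout (size, buckets, compare), the function name, and the parameter list are illustrative assumptions, not the exact declarations from the provided headers.

    typedef struct {                    /* hypothetical layout, for illustration only */
        unsigned int size;              /* number of buckets */
        void **buckets;                 /* one slot per bucket (open addressing) */
        int (*compare)(void *, void *); /* e.g. compare_words for wordcount items */
    } hash_table_t;

    unsigned int generate_hash(unsigned long int key, unsigned int table_size);

    void *linear_search(hash_table_t *ht, unsigned long int key, void *item, int *opcount)
    {
        unsigned int start = generate_hash(key, ht->size);

        for (unsigned int i = 0; i < ht->size; i++) {
            unsigned int idx = (start + i) % ht->size;
            if (ht->buckets[idx] == NULL) {
                return NULL;                 /* empty slot: the item is not in the table */
            }
            (*opcount)++;                    /* one stored entry was visited and compared */
            if (ht->compare(ht->buckets[idx], item) == 0) {
                return ht->buckets[idx];     /* found the matching node item */
            }
        }
        return NULL;                         /* every bucket probed without a match */
    }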
The insert functions return an integer value denoting success or failure of the insert (1 on success, 0 on failure), while the search functions return the node item that matches the key.
We want to be able to test the correctness and evaluate the relative merits/demerits of each collision handling mechanism. For this, we should read key-value pairs from a file and insert/search them in sequence. The input files should have exactly one word per line, without duplicates. We have provided the functions insert_from_file() and search_from_file() to perform insert and search operations, respectively. The function do_all_tests() in main.c can be used to test all three hash table implementations on the same file and print out the average number of operations required for each hash table type.
Given your implementation and the results you have seen from the various collision resolution mechanisms, write a technical paper that reports on what you found. This report should be named "[CNetID]-report.pdf" and added to your repository in the following format: