
README.md

Project Description

TODO

Installation

Prerequisites:

For the graph implementation specifically, you need to install GraphFrames manually from a third party, since the official release is incompatible with Spark 3.x (a pull request is pending). A prebuilt copy is supplied in the spark-packages directory.
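If you want to wire the prebuilt jar into your own spark-submit invocation rather than using the provided scripts, it could look roughly like this. The jar filename and the entry-point script are placeholders, not the actual names shipped in this repo; check spark-packages/ and src/spark/ for the real ones.

```shell
#!/bin/sh
# Sketch: hand spark-submit the prebuilt GraphFrames jar directly, instead of
# letting --packages resolve the official (Spark-3-incompatible) release.
# Both paths below are placeholders for the files actually in the repo.
spark-submit \
  --jars spark-packages/graphframes.jar \
  src/spark/main.py
```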

Setting up

  • Modify settings.json to reflect your setup. If you are running everything locally, you can use start_services.sh to start everything in one go. It may take a few minutes for Cassandra to become available.
  • Load the development database by running python3 setup.py from the project root. By default, this loads small_test_data.csv into the transactions table.
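For illustration, the kind of CSV ingestion setup.py performs can be sketched with the standard library alone. The column names below are hypothetical sample data standing in for small_test_data.csv (the real schema lives under config/db), and each parsed row would then become an insert into the Cassandra transactions table:

```python
import csv
import io

# Hypothetical sample mirroring the shape of a transactions CSV;
# the real columns in small_test_data.csv may differ.
SAMPLE = """tx_id,src,dst,value
1,addrA,addrB,0.5
2,addrB,addrC,1.2
"""

def load_rows(text):
    """Parse CSV text into a list of dicts, one per transaction row."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_rows(SAMPLE)
# setup.py would insert each of these rows into the `transactions` table.
```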

Deploying:

  • Start the Spark workload by running either submit.sh (slow) or submit_graph.sh (faster).
  • If you need to clean out the database, run python3 clean.py. Be aware that this wipes all table definitions and data.