

Project Description

TODO

Installation

Prerequisites

For the graph implementation specifically, you need to install GraphFrames manually from a third party, since the official release is incompatible with Spark 3.x (pull request pending). A prebuilt copy is supplied in the spark-packages directory.

Setting up

  • Modify settings.json to reflect your setup. If you are running everything locally, you can use start_services.sh to start all services in one go. It may take a few minutes for Cassandra to become available.
  • Load the development database by running python3 setup.py from the project root. By default, this loads small_test_data.csv into the transactions table.
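
For reference, a local setup might look like the sketch below. The key names here are illustrative assumptions (a Cassandra contact point and port, a keyspace, and a Spark checkpoint directory), not the actual schema of this project's settings.json — check the file itself for the real keys:

```json
{
  "cassandra_host": "127.0.0.1",
  "cassandra_port": 9042,
  "keyspace": "dev",
  "checkpoint_dir": "/tmp/spark-checkpoints"
}
```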

Deploying

  • Start the Spark workload by running either submit.sh (slow) or submit_graph.sh (faster).
  • If you need to clean out the database, you can run python3 clean.py. Be aware that this wipes all table definitions and data.