From 8a0d8085a2cf98ba0981d73150ca71e762352c5e Mon Sep 17 00:00:00 2001
From: Daniel Tomlinson
Date: Tue, 28 Sep 2021 00:29:41 +0100
Subject: [PATCH] updating documentation

---
 docs/dataflow/index.md             |  3 ---
 docs/dataflow/scaling.md           |  2 +-
 docs/discussion/approach.md        |  2 +-
 docs/discussion/cleaning.md        |  4 ++--
 docs/documentation/installation.md | 21 +++++++++++++--------
 docs/documentation/usage.md        | 17 ++++++++++-------
 docs/index.md                      |  2 +-
 7 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/docs/dataflow/index.md b/docs/dataflow/index.md
index a21fc9a..70cd31a 100644
--- a/docs/dataflow/index.md
+++ b/docs/dataflow/index.md
@@ -27,9 +27,6 @@ To get around public IP quotas I created a VPC in the `europe-west1` region that
 
 Assuming the `pp-2020.csv` file has been placed in the `./input` directory in the bucket you can run a command similar to:
 
-!!! caution
-    Use the command `python -m analyse_properties.main` as the entrypoint to the pipeline and not `analyse-properties` as the module isn't installed with poetry on the workers with the configuration below.
-
 ```bash
 python -m analyse_properties.main \
 --runner DataflowRunner \
diff --git a/docs/dataflow/scaling.md b/docs/dataflow/scaling.md
index 6cd10a3..66d6b1c 100644
--- a/docs/dataflow/scaling.md
+++ b/docs/dataflow/scaling.md
@@ -55,7 +55,7 @@ A possible solution would be to leverage BigQuery to store the results of the ma
 
 In addition to creating the mapping table `(key, value)` pairs, we also save these pairs to BigQuery at this stage. We then yield the element as it is currently written to allow the subsequent stages to make use of this data.
 
-Remove the condense mapping table stage as it is no longer needed.
+Remove the condense mapping table stage as it is no longer needed (which also saves a bit of time).
 
 Instead of using:
 
diff --git a/docs/discussion/approach.md b/docs/discussion/approach.md
index 252e0e5..343c917 100644
--- a/docs/discussion/approach.md
+++ b/docs/discussion/approach.md
@@ -21,7 +21,7 @@ The mapping table takes each row and creates a `(key,value)` pair with:
 - The key being the id across all columns (`id_all_columns`).
 - The value being the raw data as an array.
 
-The mapping table is then condensed to a single dictionary with these key, value pairs and is used as a side input further down the pipeline.
+The mapping table is then condensed to a single dictionary with these key, value pairs (automatically deduplicating repeated rows) and is used as a side input further down the pipeline.
 
 This mapping table is created to ensure the `GroupByKey` operation is as quick as possible. The more data you have to process in a `GroupByKey`, the longer the operation takes. By doing the `GroupByKey` using just the ids, the pipeline can process the files much quicker than if we included the raw data in this operation.
 
diff --git a/docs/discussion/cleaning.md b/docs/discussion/cleaning.md
index 314a829..8daaa58 100644
--- a/docs/discussion/cleaning.md
+++ b/docs/discussion/cleaning.md
@@ -64,7 +64,7 @@ We strip all leading/trailing whitespace from each column to enforce consistency
 
 Some of the data is repeated:
 
-- Some rows repeated, with the same date + price + address information but with a unique transaction id.
+- Some rows are repeated, with the same date + price + address information but with a unique transaction id.
 
 Example (PCollection)
 
@@ -87,7 +87,7 @@
     ]
   },
   {
-    "fd4634faec47c29de40bbf7840723b41": [
+    "gd4634faec47c29de40bbf7840723b42": [
       "317500",
       "2020-11-13 00:00",
       "B90 3LA",
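To make the mapping-table and deduplication ideas in the `approach.md` and `cleaning.md` hunks above concrete, here is a minimal, self-contained Apache Beam sketch. It is illustrative only and not part of this patch: the md5 key, the postcode column position, and the file paths are assumptions rather than the repository's actual code.

```python
import hashlib
import json

import apache_beam as beam

POSTCODE_COLUMN = 3  # assumed position of the postcode in the raw CSV row


def to_mapping_pair(line):
    """Build a mapping-table entry: (id_all_columns, raw columns as an array)."""
    columns = [col.strip().strip('"') for col in line.split(",")]
    key = hashlib.md5(",".join(columns).encode("utf-8")).hexdigest()
    return key, columns


with beam.Pipeline() as pipeline:
    mapping = (
        pipeline
        | "Read" >> beam.io.ReadFromText("./data/input/pp-2020.csv")
        | "ToMappingPair" >> beam.Map(to_mapping_pair)
    )

    # GroupByKey only shuffles small (postcode, id) pairs; the raw rows stay out of it.
    grouped = (
        mapping
        | "KeyByPostcode" >> beam.Map(lambda kv: (kv[1][POSTCODE_COLUMN], kv[0]))
        | "GroupByKey" >> beam.GroupByKey()
    )

    # The mapping table condensed into a dict side input recovers the raw rows after
    # the group; pairs that produced the same id_all_columns collapse onto one entry.
    _ = (
        grouped
        | "LookupRawRows" >> beam.Map(
            lambda kv, table: json.dumps({kv[0]: [table[i] for i in kv[1]]}),
            table=beam.pvalue.AsDict(mapping),
        )
        | "Write" >> beam.io.WriteToText("./data/output/mapping-example")
    )
```

Grouping on just the ids and pulling the raw data back from `beam.pvalue.AsDict(mapping)` mirrors the reasoning in the `approach.md` hunk: the shuffle only ever sees the lightweight keys.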
diff --git a/docs/documentation/installation.md b/docs/documentation/installation.md
index fca1f70..c2ebd22 100644
--- a/docs/documentation/installation.md
+++ b/docs/documentation/installation.md
@@ -6,21 +6,26 @@ The task has been tested on MacOS Big Sur and WSL2. The task should run on Windo
 
 For Beam 2.32.0 the supported versions of the Python SDK can be found [here](https://cloud.google.com/dataflow/docs/concepts/sdk-worker-dependencies#sdk-for-python).
 
-## Poetry
+## Pip
 
-The test uses [Poetry](https://python-poetry.org) for dependency management.
-
-!!! info inline end
-    If you already have Poetry installed globally you can go straight to the `poetry install` step.
-
-In a virtual environment install poetry:
+In a virtual environment run from the root of the repo:
 
 ```bash
-pip install poetry
+pip install -r requirements.txt
 ```
 
+## Poetry (Alternative)
+
+Install [Poetry](https://python-poetry.org) *globally*.
+
 From the root of the repo install the dependencies with:
 
 ```bash
 poetry install --no-dev
 ```
+
+Activate the shell with:
+
+```bash
+poetry shell
+```
diff --git a/docs/documentation/usage.md b/docs/documentation/usage.md
index 1c19bbd..bf6192f 100644
--- a/docs/documentation/usage.md
+++ b/docs/documentation/usage.md
@@ -2,7 +2,7 @@
 
 This page documents how to run the pipeline locally to complete the task for the [dataset for 2020](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#section-1).
 
-The pipeline also runs in GCP using DataFlow and is discussed further on but can be viewed here. We also discuss how to adapt the pipeline so it can run against [the full dataset](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#single-file).
+The pipeline also runs in GCP using Dataflow and is discussed further on, but can be viewed [here](../dataflow/index.md). We also discuss how to adapt the pipeline so it can run against [the full dataset](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#single-file).
 
 ## Download dataset
 
@@ -20,20 +20,20 @@ to download the data for 2020 and place in the input directory above.
 
 ## Entrypoint
 
-The entrypoint to the pipeline is `analyse-properties`.
+The entrypoint to the pipeline is `analyse_properties.main`.
 
 ## Available options
 
 Running
 
 ```bash
-analyse-properties --help
+python -m analyse_properties.main --help
 ```
 
 gives the following output:
 
 ```bash
-usage: analyse-properties [-h] [--input INPUT] [--output OUTPUT]
+usage: analyse_properties.main [-h] [--input INPUT] [--output OUTPUT]
 
 optional arguments:
   -h, --help       show this help message and exit
@@ -43,14 +43,17 @@
 
 The default value for input is `./data/input/pp-2020.csv` and the default value for output is `./data/output/pp-2020`.
 
-If passing in values for `input`/`output` these should be **full** paths to the files. The test will parse these inputs as a `str()` and pass this to `#!python beam.io.ReadFromText()`.
-
 ## Run the pipeline
 
 To run the pipeline and complete the task run:
 
 ```bash
-analyse-properties --runner DirectRunner
+python -m analyse_properties.main \
+--runner DirectRunner \
+--input ./data/input/pp-2020.csv \
+--output ./data/output/pp-2020
 ```
 
+from the root of the repo.
+
 The pipeline will use the 2020 dataset located in `./data/input` and output the resulting `.json` to `./data/output`.
diff --git a/docs/index.md b/docs/index.md
index 6164af6..9fc56e2 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -4,7 +4,7 @@
 
 This documentation accompanies the technical test for the Street Group.
 
-The following pages will guide the user through installing the requirements, and running the task to complete the test. In addition, there is some discussion around the approach, and any improvements that could be made.
+The following pages will guide the user through installing the requirements and running the task to complete the test. In addition, there is some discussion around the approach and scaling the pipeline.
 
 Navigate sections using the tabs at the top of the page. Pages in this section can be viewed in order by using the section links in the left menu, or by using bar at the bottom of the page. The table of contents in the right menu can be used to navigate sections on each page.
 
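As a companion to the `scaling.md` hunk above, here is a sketch of how the mapping `(key, value)` pairs could be branched to BigQuery at the point they are created, so that the condense-mapping-table stage can be dropped. This is an illustration under stated assumptions, not the repository's implementation: the key helper repeats the all-columns hash from the earlier sketch, and the bucket, project, dataset, table and schema names are made up.

```python
import hashlib

import apache_beam as beam


def to_mapping_pair(line):
    # Key every row on an md5 of all of its columns ("id_all_columns").
    columns = [col.strip().strip('"') for col in line.split(",")]
    return hashlib.md5(",".join(columns).encode("utf-8")).hexdigest(), columns


def run(argv=None):
    with beam.Pipeline(argv=argv) as pipeline:
        mapping = (
            pipeline
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/pp-complete.csv")  # assumed path
            | "ToMappingPair" >> beam.Map(to_mapping_pair)
        )

        # Branch 1: persist the pairs as they are created so later stages (or other
        # jobs) can query them from BigQuery instead of an in-memory mapping table.
        _ = (
            mapping
            | "PairToTableRow" >> beam.Map(
                lambda kv: {"id_all_columns": kv[0], "raw": ",".join(kv[1])}
            )
            | "WriteMappingToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:properties.mapping_table",  # assumed table spec
                schema="id_all_columns:STRING,raw:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )

        # Branch 2: the pairs continue downstream unchanged, matching the "yield the
        # element as it is currently written" description in scaling.md.
        _ = (
            mapping
            | "Ids" >> beam.Keys()
            | "WriteIds" >> beam.io.WriteToText("gs://my-bucket/output/mapping-ids")  # stand-in for the real stages
        )


if __name__ == "__main__":
    run()
```

Because the BigQuery write is a separate branch, the pairs still flow to the downstream stages untouched; running it on Dataflow would additionally need the usual Beam GCP options (`--project`, `--region`, `--temp_location`).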