updating documentation

2021-09-28 00:29:41 +01:00
parent c481c1a976
commit 8a0d8085a2
7 changed files with 28 additions and 23 deletions

View File

@@ -27,9 +27,6 @@ To get around public IP quotas I created a VPC in the `europe-west1` region that
 Assuming the `pp-2020.csv` file has been placed in the `./input` directory in the bucket you can run a command similar to:
-!!! caution
-    Use the command `python -m analyse_properties.main` as the entrypoint to the pipeline and not `analyse-properties` as the module isn't installed with poetry on the workers with the configuration below.
 ```bash
 python -m analyse_properties.main \
 --runner DataflowRunner \

View File

@@ -55,7 +55,7 @@ A possible solution would be to leverage BigQuery to store the results of the ma
 In addition to creating the mapping table `(key, value)` pairs, we also save these pairs to BigQuery at this stage. We then yield the element as it is currently written to allow the subsequent stages to make use of this data.
-Remove the condense mapping table stage as it is no longer needed.
+Remove the condense mapping table stage as it is no longer needed (which also saves a bit of time).
 Instead of using:
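
Below is a rough sketch of the pattern proposed in this hunk, not the repository's actual code: the mapping-table `(key, value)` pairs are written to BigQuery as they are created and the same PCollection is passed straight on to later stages, so the separate condense step can be dropped. The table reference, schema, CSV parsing, and key derivation are all assumptions.

```python
# Illustrative sketch only; table name, schema and key derivation are assumptions.
import hashlib

import apache_beam as beam


def to_mapping_pair(columns):
    """Build an (id_all_columns, raw columns) pair for one parsed CSV row."""
    key = hashlib.md5("".join(columns).encode("utf-8")).hexdigest()
    return key, columns


with beam.Pipeline() as pipeline:
    pairs = (
        pipeline
        | "Read" >> beam.io.ReadFromText("./data/input/pp-2020.csv")
        # Naive split for illustration; the real pipeline parses quoted CSV properly.
        | "Split" >> beam.Map(lambda line: [c.strip(' "') for c in line.split(",")])
        | "MappingPairs" >> beam.Map(to_mapping_pair)
    )

    # Save the (key, value) pairs to BigQuery at this stage...
    _ = (
        pairs
        | "ToBigQueryRow" >> beam.Map(lambda kv: {"key": kv[0], "value": ",".join(kv[1])})
        | "SavePairs" >> beam.io.WriteToBigQuery(
            "my-project:properties.mapping_table",  # placeholder table reference
            schema="key:STRING,value:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )

    # ...and yield the pairs unchanged so the subsequent stages can use them
    # directly, with no separate "condense mapping table" stage.
    _ = pairs | "Downstream" >> beam.Map(lambda kv: kv)
```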

View File

@@ -21,7 +21,7 @@ The mapping table takes each row and creates a `(key,value)` pair with:
 - The key being the id across all columns (`id_all_columns`).
 - The value being the raw data as an array.
-The mapping table is then condensed to a single dictionary with these key, value pairs and is used as a side input further down the pipeline.
+The mapping table is then condensed to a single dictionary with these key, value pairs (automatically deduplicating repeated rows) and is used as a side input further down the pipeline.
 This mapping table is created to ensure the `GroupByKey` operation is as quick as possible. The more data you have to process in a `GroupByKey`, the longer the operation takes. By doing the `GroupByKey` using just the ids, the pipeline can process the files much quicker than if we included the raw data in this operation.
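
A minimal sketch of this pattern, assuming made-up rows, a plain joined-string key in place of the real `id_all_columns` hash, and the postcode standing in for whatever the pipeline actually groups on: the `GroupByKey` shuffles only lightweight ids, and the condensed mapping table is supplied as a side input to recover the raw rows afterwards.

```python
# Minimal sketch; keys, columns and the grouping field are illustrative only.
import apache_beam as beam

rows = [
    ["317500", "2020-11-13 00:00", "B90 3LA", "1", "EXAMPLE ROAD"],
    ["250000", "2020-06-01 00:00", "B90 3LA", "2", "EXAMPLE ROAD"],
]

with beam.Pipeline() as pipeline:
    raw = pipeline | "Create" >> beam.Create(rows)

    # (id_all_columns, raw columns) pairs; the real pipeline hashes the columns.
    mapping = raw | "KeyByAllColumns" >> beam.Map(lambda cols: ("|".join(cols), cols))

    # Condensed to a single dictionary and used as a side input.
    mapping_table = beam.pvalue.AsDict(mapping)

    # GroupByKey over the ids only, keeping the shuffled data as small as possible.
    grouped_ids = (
        raw
        | "KeyByPostcode" >> beam.Map(lambda cols: (cols[2], "|".join(cols)))
        | "GroupByKey" >> beam.GroupByKey()
    )

    # Look the raw rows back up from the side input after the grouping.
    _ = (
        grouped_ids
        | "Expand" >> beam.Map(
            lambda kv, table: (kv[0], [table[i] for i in set(kv[1])]),
            table=mapping_table,
        )
        | "Print" >> beam.Map(print)
    )
```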

View File

@@ -64,7 +64,7 @@ We strip all leading/trailing whitespace from each column to enforce consistency
 Some of the data is repeated:
-- Some rows repeated, with the same date + price + address information but with a unique transaction id.
+- Some rows are repeated, with the same date + price + address information but with a unique transaction id.
 <details>
 <summary>Example (PCollection)</summary>
@@ -87,7 +87,7 @@ Some of the data is repeated:
 ]
 },
 {
-"fd4634faec47c29de40bbf7840723b41": [
+"gd4634faec47c29de40bbf7840723b42": [
 "317500",
 "2020-11-13 00:00",
 "B90 3LA",

View File

@@ -6,21 +6,26 @@ The task has been tested on MacOS Big Sur and WSL2. The task should run on Windo
 For Beam 2.32.0 the supported versions of the Python SDK can be found [here](https://cloud.google.com/dataflow/docs/concepts/sdk-worker-dependencies#sdk-for-python).
-## Poetry
-The test uses [Poetry](https://python-poetry.org) for dependency management.
-!!! info inline end
-    If you already have Poetry installed globally you can go straight to the `poetry install` step.
-In a virtual environment install poetry:
+## Pip
+In a virtual environment run from the root of the repo:
 ```bash
-pip install poetry
+pip install -r requirements.txt
 ```
+## Poetry (Alternative)
+Install [Poetry](https://python-poetry.org) *globally*
 From the root of the repo install the dependencies with:
 ```bash
 poetry install --no-dev
 ```
+Activate the shell with:
+```bash
+poetry shell
+```

View File

@@ -2,7 +2,7 @@
 This page documents how to run the pipeline locally to complete the task for the [dataset for 2020](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#section-1).
-The pipeline also runs in GCP using DataFlow and is discussed further on but can be viewed here. We also discuss how to adapt the pipeline so it can run against [the full dataset](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#single-file).
+The pipeline also runs in GCP using DataFlow and is discussed further on but can be viewed [here](../dataflow/index.md). We also discuss how to adapt the pipeline so it can run against [the full dataset](https://www.gov.uk/government/statistical-data-sets/price-paid-data-downloads#single-file).
 ## Download dataset
@@ -20,20 +20,20 @@ to download the data for 2020 and place in the input directory above.
 ## Entrypoint
-The entrypoint to the pipeline is `analyse-properties`.
+The entrypoint to the pipeline is `analyse_properties.main`.
 ## Available options
 Running
 ```bash
-analyse-properties --help
+python -m analyse_properties.main --help
 ```
 gives the following output:
 ```bash
-usage: analyse-properties [-h] [--input INPUT] [--output OUTPUT]
+usage: analyse_properties.main [-h] [--input INPUT] [--output OUTPUT]
 optional arguments:
 -h, --help show this help message and exit
@@ -43,14 +43,17 @@ optional arguments:
 The default value for input is `./data/input/pp-2020.csv` and the default value for output is `./data/output/pp-2020`.
-If passing in values for `input`/`output` these should be **full** paths to the files. The test will parse these inputs as a `str()` and pass this to `#!python beam.io.ReadFromText()`.
 ## Run the pipeline
 To run the pipeline and complete the task run:
 ```bash
-analyse-properties --runner DirectRunner
+python -m analyse_properties.main \
+--runner DirectRunner \
+--input ./data/input/pp-2020.csv \
+--output ./data/output/pp-2020
 ```
+from the root of the repo.
 The pipeline will use the 2020 dataset located in `./data/input` and output the resulting `.json` to `./data/output`.
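
As a sketch of how the `--input`/`--output` options and the defaults above are typically wired into a Beam entrypoint; the real `analyse_properties.main` may differ in its parsing and transforms.

```python
# Sketch of a typical entrypoint; the real module's parsing/transforms may differ.
import argparse

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", default="./data/input/pp-2020.csv")
    parser.add_argument("--output", default="./data/output/pp-2020")
    known_args, pipeline_args = parser.parse_known_args(argv)

    # Remaining args (e.g. --runner DirectRunner) go to the Beam pipeline options.
    options = PipelineOptions(pipeline_args)
    with beam.Pipeline(options=options) as pipeline:
        _ = (
            pipeline
            | "Read" >> beam.io.ReadFromText(known_args.input)
            | "Write" >> beam.io.WriteToText(known_args.output, file_name_suffix=".json")
        )


if __name__ == "__main__":
    run()
```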

View File

@@ -4,7 +4,7 @@
 This documentation accompanies the technical test for the Street Group.
-The following pages will guide the user through installing the requirements, and running the task to complete the test. In addition, there is some discussion around the approach, and any improvements that could be made.
+The following pages will guide the user through installing the requirements, and running the task to complete the test. In addition, there is some discussion around the approach, and scaling the pipeline.
 Navigate sections using the tabs at the top of the page. Pages in this section can be viewed in order by using the section links in the left menu, or by using the bar at the bottom of the page. The table of contents in the right menu can be used to navigate sections on each page.