mirror of
https://github.com/dtomlinson91/street_group_tech_test
synced 2025-12-22 11:55:45 +00:00
adding initial docs

docs/discussion/approach.md (new file)
@@ -0,0 +1 @@

# Approach

docs/discussion/cleaning.md (new file)
@@ -0,0 +1,115 @@

# Cleaning

In this page we discuss the cleaning stages and how best to prepare the data.

## Uniquely identify a property

With the data we have, a Postcode together with the PAON (or SAON, or a combination of both) is enough to uniquely identify a property.

### Postcode

Because so few properties are missing a postcode (0.2% of all records), we drop every row that does not have one. This discards some properties that could be uniquely identified with more work, but the records missing a postcode tend to be unusual commercial/industrial sites (e.g. a power plant).
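As a sketch, the drop can be expressed as a simple predicate applied with Beam's `beam.Filter`. The postcode column index used here is illustrative, not the real row layout:

```python
# Sketch of the postcode filter. Assumes the postcode sits at a fixed
# index in each row (index 2 here is illustrative, not the real layout).
POSTCODE_INDEX = 2

def has_postcode(row: list) -> bool:
    """Return True if the row has a non-empty postcode."""
    return bool(row[POSTCODE_INDEX].strip())

# In the Beam pipeline this predicate would be applied with:
#   rows | beam.Filter(has_postcode)
```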
### PAON/SAON

The PAON has 3 possible formats:

- The street number.
- The building name.
- The building name and street number (comma delimited).

The SAON:

- Identifies the apartment/flat number within the building.
- Is present in only 11.7% of records; when it is, the PAON will be either:
    - The building name.
    - The building name and street number.

Because of the way the PAON and SAON are defined, if a row is missing **both** of these columns we drop it, as a postcode alone is generally not enough to uniquely identify a property.

!!! tip
    In a production environment we could send these rows to a sink table (in BigQuery, for example) rather than drop them outright. Collecting these rows over time might reveal patterns in how to uniquely identify properties that are missing these fields.
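The missing-both rule can be sketched in the same way as the postcode filter; the PAON/SAON column indices are again illustrative assumptions:

```python
# Sketch of the PAON/SAON rule: drop a row only when *both* are empty.
# Indices 3 (PAON) and 4 (SAON) are illustrative, not the real layout.
PAON_INDEX = 3
SAON_INDEX = 4

def has_paon_or_saon(row: list) -> bool:
    """Return True if the row has at least one of PAON or SAON."""
    return bool(row[PAON_INDEX].strip()) or bool(row[SAON_INDEX].strip())
```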

We split the PAON as part of the cleaning stage. If the PAON contains a comma then it holds both the building name and the street number: we keep the street number in the PAON position and insert the building name as a new column at the end of the row. If the PAON does not contain a comma we insert a blank column at the end, keeping the number of columns in the PCollection consistent.
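A sketch of that split; the PAON index and the assumption that the building name comes before the number (`BUILDING NAME, 12`) are illustrative:

```python
PAON_INDEX = 3  # illustrative position of the PAON in the row

def split_paon(row: list) -> list:
    """Split a comma-delimited PAON into street number + building name.

    Assumes the format 'BUILDING NAME, NUMBER' (an assumption about the
    data). If there is no comma, a blank building-name column is appended
    so every row keeps the same number of columns.
    """
    row = list(row)  # avoid mutating the caller's row
    if "," in row[PAON_INDEX]:
        building_name, _, number = row[PAON_INDEX].partition(",")
        row[PAON_INDEX] = number.strip()
        row.append(building_name.strip())
    else:
        row.append("")
    return row
```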

### Unneeded columns

To keep computation costs and time down, I decided to drop the categorical columns provided. These include:

- Property Type.
- Old/New.
- Duration.
- PPD Category Type.
- Record Status - monthly file only.

Initially I was working against the full dataset, so dropping these columns makes a noticeable difference to the amount of data that needs processing.

These columns do provide some relevant information (old/new, duration, property type) and could be added back into the pipeline fairly easily; due to time constraints I was unable to make this change.
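A minimal sketch of the column drop; the index list is illustrative rather than the real price paid CSV layout:

```python
# Indices of the columns the pipeline keeps; illustrative values only,
# not the real price paid CSV layout.
KEEP_INDICES = [1, 2, 3, 7, 8]

def drop_columns(row: list) -> list:
    """Keep only the columns the pipeline needs."""
    return [row[i] for i in KEEP_INDICES]
```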

In addition, I dropped the transaction unique identifier column. I wanted the IDs calculated in the pipeline to be consistent in format, and hashing a string with MD5 is cheap to compute, with complexity $\mathcal{O}(n)$ in the length of the string.
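A sketch of how such an ID could be derived; which address fields are joined, and the `|` separator, are assumptions for illustration:

```python
import hashlib

def property_id(postcode: str, paon: str, saon: str) -> str:
    """Derive a consistent 32-character hex ID from the address fields.

    The choice of fields and the '|' separator are illustrative.
    """
    key = "|".join([postcode, paon, saon])
    return hashlib.md5(key.encode("utf-8")).hexdigest()
```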

### General cleaning

#### Upper case

As all strings in the dataset are upper case, we convert every field in the row to upper case to enforce consistency across the dataset.

#### Strip leading/trailing whitespace

We strip all leading/trailing whitespace from each column to enforce consistency.
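Both general cleaning steps can be sketched as one row-level function:

```python
def clean_row(row: list) -> list:
    """Strip whitespace and upper-case every field in the row."""
    return [field.strip().upper() for field in row]
```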

#### Repeated rows

Some of the data is repeated:

- Some rows are exact repeats, with the same date + price + address information but a unique transaction ID.

<details>
<summary>Example (PCollection)</summary>

```json
[
    {
        "fd4634faec47c29de40bbf7840723b41": [
            "317500",
            "2020-11-13 00:00",
            "B90 3LA",
            "1",
            "",
            "VERSTONE ROAD",
            "SHIRLEY",
            "SOLIHULL",
            "SOLIHULL",
            "WEST MIDLANDS",
            ""
        ]
    },
    {
        "fd4634faec47c29de40bbf7840723b41": [
            "317500",
            "2020-11-13 00:00",
            "B90 3LA",
            "1",
            "",
            "VERSTONE ROAD",
            "SHIRLEY",
            "SOLIHULL",
            "SOLIHULL",
            "WEST MIDLANDS",
            ""
        ]
    }
]
```

</details>

These rows will be deduplicated as part of the pipeline.

- Some rows have the same date + address information, but different prices.

It would be very unusual to see multiple transactions on the same date for the same property. One explanation is a data entry error, resulting in two different transactions of which only one is the real price; as the date column does not contain the time (it is fixed at `00:00`) it is impossible to tell. Another explanation is missing building/flat/apartment information in the entry.

We **keep** these in the data, resulting in some properties having multiple transactions with different prices on the same date. Without a time or more information to go on, it is difficult to see how these could be filtered out.
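The deduplication of exact repeats, keyed on the computed hash, can be sketched in plain Python (in the Beam pipeline this would be a `GroupByKey`/`Distinct`-style step over the keyed PCollection shown above):

```python
def deduplicate(keyed_rows):
    """Keep the first row seen for each key (hash), dropping exact repeats.

    Expects the PCollection shape from the example: a sequence of
    single-entry dicts mapping a hash ID to a row.
    """
    seen = {}
    for keyed in keyed_rows:
        for key, row in keyed.items():
            seen.setdefault(key, row)  # first occurrence wins
    return seen
```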

docs/discussion/exploration.md (new file)
@@ -0,0 +1,30 @@

# Data Exploration Report

A brief exploration was done on the **full** dataset using the module `pandas-profiling`. The module uses `pandas` to load a dataset, automatically computes quantile/descriptive statistics, common values, extreme values, skew, kurtosis etc., and produces an `.html` report that can be viewed interactively in your browser.

The script used to generate this report is located in `./exploration/report.py` and can be viewed below.

<details>
<summary>report.py</summary>

```python
--8<-- "exploration/report.py"
```

</details>

The report can be viewed by clicking the Data Exploration Report tab at the top of the page.

## Interesting observations

When looking at the report we are checking for data quality issues and missing values; the descriptive statistics are interesting to see but largely irrelevant for this task.

The data overall looks very good for a dataset of its size (~27 million records). The most important fields have no missing values:

- Every row has a price.
- Every row has a unique transaction ID.
- Every row has a transaction date.

Some fields that we will need are missing data:

- ~42,000 rows (0.2%) are missing a Postcode.
- ~4,000 (<0.1%) are missing a PAON (primary addressable object name).
- ~412,000 (1.6%) are missing a Street Name.

docs/discussion/introduction.md (new file)
@@ -0,0 +1,9 @@

# Introduction

This section will go through some discussion of the test, including:

- Data exploration
- Cleaning the data
- Interpreting the results
- Deploying on GCP Dataflow
- Improvements