Finished Model Evaluation Metrics
@@ -0,0 +1,354 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Boston Housing Data\n",
"\n",
"To gain a better understanding of the metrics used in regression settings, we will be looking at the Boston Housing dataset. \n",
"\n",
"First, use the cell below to read in the dataset and set up the training and testing data that will be used for the rest of this problem."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_boston\n",
"from sklearn.model_selection import train_test_split\n",
"import numpy as np\n",
"import tests2 as t\n",
"\n",
"boston = load_boston()\n",
"y = boston.target\n",
"X = boston.data\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
"    X, y, test_size=0.33, random_state=42)"
]
},
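{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* `load_boston` was deprecated in scikit-learn 1.0 and removed in 1.2. If the import above fails on a newer scikit-learn, the cell below is a minimal sketch of the workaround suggested in scikit-learn's own deprecation notice, assuming network access to the original CMU source."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Editor's sketch: load the Boston data without load_boston\n",
"# (for scikit-learn >= 1.2, where load_boston was removed)\n",
"import pandas as pd\n",
"\n",
"data_url = \"http://lib.stat.cmu.edu/datasets/boston\"\n",
"raw_df = pd.read_csv(data_url, sep=\"\\\\s+\", skiprows=22, header=None)\n",
"X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])\n",
"y = raw_df.values[1::2, 2]"
]
},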
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 1:** Before we get too far, let's do a quick check of the models you can use in this situation, given that you are working on a regression problem. Use the dictionary and the corresponding letters below to mark which problem type(s) each model can be used for."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"That's right! All but logistic regression can be used for predicting numeric values. And linear regression is the only one of these that you should not use for predicting categories. Technically sklearn won't stop you from doing almost anything you want, but you probably want to treat each case the way you found by answering this question!\n"
]
}
],
"source": [
"# When can you use the model - use each option as many times as necessary\n",
"a = 'regression'\n",
"b = 'classification'\n",
"c = 'both regression and classification'\n",
"\n",
"models = {\n",
"    'decision trees': c,\n",
"    'random forest': c,\n",
"    'adaptive boosting': c,\n",
"    'logistic regression': b,\n",
"    'linear regression': a\n",
"}\n",
"\n",
"# checks your answer, no need to change this code\n",
"t.q1_check(models)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 2:** Now, for each of the models you found in the previous question that can be used for regression problems, import them from sklearn."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import models from sklearn - notice you will want to use\n",
"# the regressor version (not classifier) - googling to find\n",
"# each of these is what we all do!\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 3:** Now that you have imported the 4 models that can be used for regression problems, instantiate each below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Instantiate each of the models you imported\n",
"# For now use the defaults for all the hyperparameters\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 4:** Fit each of your instantiated models on the training data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Fit each of your models using the training data\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 5:** Use each of your models to predict on the test data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Predict on the test values for each model\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 6:** Now for the information related to this lesson. Use the dictionary to match each metric to the setting it is used in: regression, classification, or both."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# potential answer options - use each as many times as necessary\n",
"a = 'regression'\n",
"b = 'classification'\n",
"c = 'both regression and classification'\n",
"\n",
"metrics = {\n",
"    'precision': None,  # letter here\n",
"    'recall': None,  # letter here\n",
"    'accuracy': None,  # letter here\n",
"    'r2_score': None,  # letter here\n",
"    'mean_squared_error': None,  # letter here\n",
"    'area_under_curve': None,  # letter here\n",
"    'mean_absolute_error': None  # letter here\n",
"}\n",
"\n",
"# checks your answer, no need to change this code\n",
"t.q6_check(metrics)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 6b:** Now that you have identified the metrics that can be used for regression problems, import them from sklearn."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Import the metrics from sklearn\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 7:** Similar to what you did with classification models, let's make sure you are comfortable with how exactly each of these metrics is calculated. We can then check that our value matches what sklearn provides."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def r2(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the r-squared score as a float\n",
"    '''\n",
"    sse = np.sum((actual-preds)**2)\n",
"    sst = np.sum((actual-np.mean(actual))**2)\n",
"    return 1 - sse/sst\n",
"\n",
"# Check solution matches sklearn\n",
"# (preds_tree should be your decision tree predictions from Step 5)\n",
"print(r2(y_test, preds_tree))\n",
"print(r2_score(y_test, preds_tree))\n",
"print(\"Since the above match, we can see that we have correctly calculated the r2 value.\")"
]
},
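{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* For reference, the three regression metrics in this notebook, for actuals $y_i$, predictions $\\hat{y}_i$, mean $\\bar{y}$, and $n$ test points:\n",
"\n",
"$$R^2 = 1 - \\frac{\\sum_i (y_i - \\hat{y}_i)^2}{\\sum_i (y_i - \\bar{y})^2}, \\qquad \\mathrm{MSE} = \\frac{1}{n}\\sum_i (y_i - \\hat{y}_i)^2, \\qquad \\mathrm{MAE} = \\frac{1}{n}\\sum_i |y_i - \\hat{y}_i|$$"
]
},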
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 8:** Your turn! Fill in the function below and see if your result matches the built-in mean_squared_error. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def mse(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the mean squared error as a float\n",
"    '''\n",
"    \n",
"    return None  # calculate mse here\n",
"\n",
"\n",
"# Check your solution matches sklearn\n",
"print(mse(y_test, preds_tree))\n",
"print(mean_squared_error(y_test, preds_tree))\n",
"print(\"If the above match, you are all set!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 9:** Now one last time - complete the function for mean absolute error. Then check your function against the sklearn metric to ensure they match. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def mae(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the mean absolute error as a float\n",
"    '''\n",
"    \n",
"    return None  # calculate the mae here\n",
"\n",
"# Check your solution matches sklearn\n",
"print(mae(y_test, preds_tree))\n",
"print(mean_absolute_error(y_test, preds_tree))\n",
"print(\"If the above match, you are all set!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 10:** Which model performed the best in terms of each of the metrics? Note that r2 and mse will always agree on the best model (on a fixed test set, r2 is a monotone transformation of mse), but mae may pick a different one. Use the dictionary and space below to match the best model to each metric."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# match each metric to the model that performed best on it\n",
"a = 'decision tree'\n",
"b = 'random forest'\n",
"c = 'adaptive boosting'\n",
"d = 'linear regression'\n",
"\n",
"best_fit = {\n",
"    'mse': None,  # letter here\n",
"    'r2': None,  # letter here\n",
"    'mae': None  # letter here\n",
"}\n",
"\n",
"# Tests your answer - don't change this code\n",
"t.check_ten(best_fit)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# cells for work"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -0,0 +1,486 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Boston Housing Data\n",
"\n",
"To gain a better understanding of the metrics used in regression settings, we will be looking at the Boston Housing dataset. \n",
"\n",
"First, use the cell below to read in the dataset and set up the training and testing data that will be used for the rest of this problem."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from sklearn.datasets import load_boston\n",
"from sklearn.model_selection import train_test_split\n",
"import numpy as np\n",
"import tests2 as t\n",
"\n",
"boston = load_boston()\n",
"y = boston.target\n",
"X = boston.data\n",
"\n",
"X_train, X_test, y_train, y_test = train_test_split(\n",
"    X, y, test_size=0.33, random_state=42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 1:** Before we get too far, let's do a quick check of the models you can use in this situation, given that you are working on a regression problem. Use the dictionary and the corresponding letters below to mark which problem type(s) each model can be used for."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"That's right! All but logistic regression can be used for predicting numeric values. And linear regression is the only one of these that you should not use for predicting categories. Technically sklearn won't stop you from doing almost anything you want, but you probably want to treat each case the way you found by answering this question!\n"
]
}
],
"source": [
"# When can you use the model - use each option as many times as necessary\n",
"a = 'regression'\n",
"b = 'classification'\n",
"c = 'both regression and classification'\n",
"\n",
"models = {\n",
"    'decision trees': c,\n",
"    'random forest': c,\n",
"    'adaptive boosting': c,\n",
"    'logistic regression': b,\n",
"    'linear regression': a\n",
"}\n",
"\n",
"# checks your answer, no need to change this code\n",
"t.q1_check(models)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 2:** Now, for each of the models you found in the previous question that can be used for regression problems, import them from sklearn."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# Import models from sklearn - notice you will want to use\n",
"# the regressor version (not classifier) - googling to find\n",
"# each of these is what we all do!\n",
"from sklearn.tree import DecisionTreeRegressor\n",
"from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor\n",
"from sklearn.linear_model import LinearRegression"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 3:** Now that you have imported the 4 models that can be used for regression problems, instantiate each below."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"# Instantiate each of the models you imported\n",
"# For now use the defaults for all the hyperparameters\n",
"dec_tree = DecisionTreeRegressor()\n",
"ran_for = RandomForestRegressor()\n",
"ada = AdaBoostRegressor()\n",
"lin_reg = LinearRegression()"
]
},
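{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* The tree-based models above are randomized, so the fitted results and metric values recorded below will vary from run to run. A minimal sketch of seeded instantiation, if reproducibility matters (the seed value is arbitrary):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Editor's sketch: the same models, seeded for reproducible results\n",
"dec_tree = DecisionTreeRegressor(random_state=42)\n",
"ran_for = RandomForestRegressor(random_state=42)\n",
"ada = AdaBoostRegressor(random_state=42)\n",
"lin_reg = LinearRegression()  # deterministic - no seed needed"
]
},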
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 4:** Fit each of your instantiated models on the training data."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Fit each of your models using the training data\n",
"dec_tree.fit(X_train, y_train)\n",
"ran_for.fit(X_train, y_train)\n",
"ada.fit(X_train, y_train)\n",
"lin_reg.fit(X_train, y_train)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 5:** Use each of your models to predict on the test data."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"# Predict on the test values for each model\n",
"dec_pred = dec_tree.predict(X_test)\n",
"ran_pred = ran_for.predict(X_test)\n",
"ada_pred = ada.predict(X_test)\n",
"lin_pred = lin_reg.predict(X_test)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 6:** Now for the information related to this lesson. Use the dictionary to match each metric to the setting it is used in: regression, classification, or both."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"That's right! Looks like you know your metrics!\n"
]
}
],
"source": [
"# potential answer options - use each as many times as necessary\n",
"a = 'regression'\n",
"b = 'classification'\n",
"c = 'both regression and classification'\n",
"\n",
"metrics = {\n",
"    'precision': b,\n",
"    'recall': b,\n",
"    'accuracy': b,\n",
"    'r2_score': a,\n",
"    'mean_squared_error': a,\n",
"    'area_under_curve': b,\n",
"    'mean_absolute_error': a\n",
"}\n",
"\n",
"# checks your answer, no need to change this code\n",
"t.q6_check(metrics)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 6b:** Now that you have identified the metrics that can be used for regression problems, import them from sklearn."
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"# Import the metrics from sklearn\n",
"from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 7:** Similar to what you did with classification models, let's make sure you are comfortable with how exactly each of these metrics is calculated. We can then check that our value matches what sklearn provides."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"r2 manual for dec_pred is 0.7334\n",
"r2 sklearn for dec_pred is 0.7334\n",
"\n",
"r2 manual for ran_pred is 0.8608\n",
"r2 sklearn for ran_pred is 0.8608\n",
"\n",
"r2 manual for ada_pred is 0.7936\n",
"r2 sklearn for ada_pred is 0.7936\n",
"\n",
"r2 manual for lin_pred is 0.7259\n",
"r2 sklearn for lin_pred is 0.7259\n",
"\n"
]
}
],
"source": [
"def r2(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the r-squared score as a float\n",
"    '''\n",
"    sse = np.sum((actual-preds)**2)\n",
"    sst = np.sum((actual-np.mean(actual))**2)\n",
"    return 1 - sse/sst\n",
"\n",
"# Check the manual solution matches sklearn for every model\n",
"models = {'dec_pred': dec_pred, 'ran_pred': ran_pred, 'ada_pred': ada_pred,\n",
"          'lin_pred': lin_pred}\n",
"metrics = [r2_score, mean_squared_error, mean_absolute_error]\n",
"\n",
"for i in models:\n",
"    print(f'r2 manual for {i} is {r2(y_test, models[i]):.4f}')\n",
"    print(f'r2 sklearn for {i} is {r2_score(y_test, models[i]):.4f}')\n",
"    print()\n"
]
},
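{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* On a fixed test set, $R^2 = 1 - \\mathrm{MSE}/\\mathrm{Var}(y)$: dividing both the numerator and the denominator of $\\mathrm{SSE}/\\mathrm{SST}$ by $n$ leaves the ratio unchanged. This is why r2 and mse always rank models identically (see Step 10). A quick check:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Editor's sketch: recover r2 from sklearn's mse and the test-set variance\n",
"print(1 - mean_squared_error(y_test, dec_pred) / np.var(y_test))\n",
"print(r2_score(y_test, dec_pred))  # the two printed values should match"
]
},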
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 8:** Your turn! Fill in the function below and see if your result matches the built-in mean_squared_error. "
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"mse manual for dec_pred is 20.1762\n",
"mse sklearn for dec_pred is 20.1762\n",
"\n",
"mse manual for ran_pred is 10.5380\n",
"mse sklearn for ran_pred is 10.5380\n",
"\n",
"mse manual for ada_pred is 15.6183\n",
"mse sklearn for ada_pred is 15.6183\n",
"\n",
"mse manual for lin_pred is 20.7471\n",
"mse sklearn for lin_pred is 20.7471\n",
"\n"
]
}
],
"source": [
"def mse(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the mean squared error as a float\n",
"    '''\n",
"    \n",
"    return np.sum((actual-preds)**2)/len(actual)\n",
"\n",
"# Check your solution matches sklearn\n",
"for i in models:\n",
"    print(f'mse manual for {i} is {mse(y_test, models[i]):.4f}')\n",
"    print(f'mse sklearn for {i} is {mean_squared_error(y_test, models[i]):.4f}')\n",
"    print()"
]
},
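{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* Rather than eyeballing printed values, `np.allclose` can verify the agreement programmatically; a small sketch:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Editor's sketch: assert the manual mse matches sklearn for every model\n",
"for name, preds in models.items():\n",
"    assert np.allclose(mse(y_test, preds), mean_squared_error(y_test, preds))\n",
"print('Manual mse matches sklearn for every model.')"
]
},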
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 9:** Now one last time - complete the function for mean absolute error. Then check your function against the sklearn metric to ensure they match. "
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"mae manual for dec_pred is 3.1707\n",
"mae sklearn for dec_pred is 3.1707\n",
"\n",
"mae manual for ran_pred is 2.2222\n",
"mae sklearn for ran_pred is 2.2222\n",
"\n",
"mae manual for ada_pred is 2.7089\n",
"mae sklearn for ada_pred is 2.7089\n",
"\n",
"mae manual for lin_pred is 3.1513\n",
"mae sklearn for lin_pred is 3.1513\n",
"\n"
]
}
],
"source": [
"def mae(actual, preds):\n",
"    '''\n",
"    INPUT:\n",
"    actual - numpy array or pd series of actual y values\n",
"    preds - numpy array or pd series of predicted y values\n",
"    OUTPUT:\n",
"    returns the mean absolute error as a float\n",
"    '''\n",
"    \n",
"    return np.sum(np.abs(actual-preds))/len(actual)\n",
"\n",
"# Check your solution matches sklearn\n",
"for i in models:\n",
"    print(f'mae manual for {i} is {mae(y_test, models[i]):.4f}')\n",
"    print(f'mae sklearn for {i} is'\n",
"          f' {mean_absolute_error(y_test, models[i]):.4f}')\n",
"    print()\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **Step 10:** Which model performed the best in terms of each of the metrics? Note that r2 and mse will always agree on the best model (on a fixed test set, r2 is a monotone transformation of mse), but mae may pick a different one. Use the dictionary and space below to match the best model to each metric."
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"That's right! The random forest was best in terms of all the metrics this time!\n"
]
}
],
"source": [
"# match each metric to the model that performed best on it\n",
"a = 'decision tree'\n",
"b = 'random forest'\n",
"c = 'adaptive boosting'\n",
"d = 'linear regression'\n",
"\n",
"best_fit = {\n",
"    'mse': b,\n",
"    'r2': b,\n",
"    'mae': b\n",
"}\n",
"\n",
"# Tests your answer - don't change this code\n",
"t.check_ten(best_fit)"
]
},
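{
"cell_type": "markdown",
"metadata": {},
"source": [
"*(Editor's note)* The matching above can also be done programmatically; a sketch that selects the best model per metric (higher is better for r2, lower for mse and mae). With the recorded results above, all three picks are the random forest:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Editor's sketch: pick the best model name for each metric\n",
"best_by_r2 = max(models, key=lambda m: r2_score(y_test, models[m]))\n",
"best_by_mse = min(models, key=lambda m: mean_squared_error(y_test, models[m]))\n",
"best_by_mae = min(models, key=lambda m: mean_absolute_error(y_test, models[m]))\n",
"print(best_by_r2, best_by_mse, best_by_mae)"
]
},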
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Comparison of all models:\n",
"\n",
"r2_score for dec_pred 0.7334\n",
"mean_squared_error for dec_pred 20.1762\n",
"mean_absolute_error for dec_pred 3.1707\n",
"\n",
"r2_score for ran_pred 0.8608\n",
"mean_squared_error for ran_pred 10.5380\n",
"mean_absolute_error for ran_pred 2.2222\n",
"\n",
"r2_score for ada_pred 0.7936\n",
"mean_squared_error for ada_pred 15.6183\n",
"mean_absolute_error for ada_pred 2.7089\n",
"\n",
"r2_score for lin_pred 0.7259\n",
"mean_squared_error for lin_pred 20.7471\n",
"mean_absolute_error for lin_pred 3.1513\n",
"\n"
]
}
],
"source": [
"# cells for work\n",
"\n",
"models = {'dec_pred': dec_pred, 'ran_pred': ran_pred, 'ada_pred': ada_pred,\n",
"          'lin_pred': lin_pred}\n",
"metrics = [r2_score, mean_squared_error, mean_absolute_error]\n",
"\n",
"print('Comparison of all models:\\n')\n",
"for i in models:\n",
"    for metric in metrics:\n",
"        print(f'{metric.__name__} for '\n",
"              f'{i} {metric(y_test, models[i]):.4f}')\n",
"    print()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
Binary file not shown.
@@ -0,0 +1,143 @@
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
import numpy as np
import tests2 as t
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

boston = load_boston()
y = boston.target
X = boston.data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42)


# Instantiate each model with default hyperparameters
dec_tree = DecisionTreeRegressor()
ran_for = RandomForestRegressor()
ada = AdaBoostRegressor()
lin_reg = LinearRegression()


# Fit each model on the training data
dec_tree.fit(X_train, y_train)
ran_for.fit(X_train, y_train)
ada.fit(X_train, y_train)
lin_reg.fit(X_train, y_train)


# Predict on the test data with each model
dec_pred = dec_tree.predict(X_test)
ran_pred = ran_for.predict(X_test)
ada_pred = ada.predict(X_test)
lin_pred = lin_reg.predict(X_test)


# potential answer options
a = 'regression'
b = 'classification'
c = 'both regression and classification'

metrics_dict = {
    'precision': b,
    'recall': b,
    'accuracy': b,
    'r2_score': a,
    'mean_squared_error': a,
    'area_under_curve': b,
    'mean_absolute_error': a
}

# checks your answer, no need to change this code
t.q6_check(metrics_dict)
print()

models = {'dec_pred': dec_pred, 'ran_pred': ran_pred, 'ada_pred': ada_pred,
          'lin_pred': lin_pred}
metrics = [r2_score, mean_squared_error, mean_absolute_error]


# Check the manual r2 matches sklearn
def r2(actual, preds):
    '''
    INPUT:
    actual - numpy array or pd series of actual y values
    preds - numpy array or pd series of predicted y values
    OUTPUT:
    returns the r-squared score as a float
    '''
    sse = np.sum((actual - preds)**2)
    sst = np.sum((actual - np.mean(actual))**2)
    return 1 - sse / sst


for i in models:
    print(f'r2 manual for {i} is {r2(y_test, models[i]):.4f}')
    print(f'r2 sklearn for {i} is {r2_score(y_test, models[i]):.4f}')
    print()


# Check the manual mse matches sklearn
def mse(actual, preds):
    '''
    INPUT:
    actual - numpy array or pd series of actual y values
    preds - numpy array or pd series of predicted y values
    OUTPUT:
    returns the mean squared error as a float
    '''

    return np.sum((actual - preds)**2) / len(actual)


for i in models:
    print(f'mse manual for {i} is {mse(y_test, models[i]):.4f}')
    print(f'mse sklearn for {i} is'
          f' {mean_squared_error(y_test, models[i]):.4f}')
    print()


# Check the manual mae matches sklearn
def mae(actual, preds):
    '''
    INPUT:
    actual - numpy array or pd series of actual y values
    preds - numpy array or pd series of predicted y values
    OUTPUT:
    returns the mean absolute error as a float
    '''

    return np.sum(np.abs(actual - preds)) / len(actual)


for i in models:
    print(f'mae manual for {i} is {mae(y_test, models[i]):.4f}')
    print(f'mae sklearn for {i} is'
          f' {mean_absolute_error(y_test, models[i]):.4f}')
    print()


print('=================')
print('Comparison of all models:\n')
for i in models:
    for metric in metrics:
        print(f'{metric.__name__} for '
              f'{i} {metric(y_test, models[i]):.4f}')
    print()


# match each metric to the model that performed best on it
a = 'decision tree'
b = 'random forest'
c = 'adaptive boosting'
d = 'linear regression'

best_fit = {
    'mse': b,
    'r2': b,
    'mae': b
}

# Tests your answer - don't change this code
t.check_ten(best_fit)
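# (Editor's note) MSE is in the target's squared units; the root mean squared
# error (RMSE = sqrt(MSE)) is often reported instead because it is back in the
# target's own units. A minimal sketch using the models dict defined above:
for i in models:
    print(f'rmse for {i} is {np.sqrt(mean_squared_error(y_test, models[i])):.4f}')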
@@ -0,0 +1,94 @@
def q1_check(models_dict):
    '''
    INPUT:
    models_dict - a dictionary with models and what types of problems the models can be used for

    OUTPUT:
    nothing returned
    prints statements related to the correctness of the dictionary
    '''
    a = 'regression'
    b = 'classification'
    c = 'both regression and classification'

    models = {
        'decision trees': c,
        'random forest': c,
        'adaptive boosting': c,
        'logistic regression': b,
        'linear regression': a,
    }

    if models == models_dict:
        print("That's right! All but logistic regression can be used for predicting numeric values. And linear regression is the only one of these that you should not use for predicting categories. Technically sklearn won't stop you from doing almost anything you want, but you probably want to treat each case the way you found by answering this question!")

    if models['logistic regression'] != models_dict['logistic regression']:
        print("Oops! In most cases, you will only want to use logistic regression for classification problems.")

    if models['linear regression'] != models_dict['linear regression']:
        print("Oops! Linear regression should actually only be used in regression cases. Try again.")

    if (models['decision trees'] != models_dict['decision trees']) or (models['random forest'] != models_dict['random forest']) or (models['adaptive boosting'] != models_dict['adaptive boosting']):
        print("Oops! Actually random forests, decision trees, and adaptive boosting are all techniques that can be used for both regression and classification. Try again!")


def q6_check(metrics):
    '''
    INPUT:
    metrics - a dictionary with metrics and what types of problems the metrics can be used for

    OUTPUT:
    nothing returned
    prints statements related to the correctness of the dictionary
    '''
    a = 'regression'
    b = 'classification'
    c = 'both regression and classification'

    metrics_ch = {
        'precision': b,
        'recall': b,
        'accuracy': b,
        'r2_score': a,
        'mean_squared_error': a,
        'area_under_curve': b,
        'mean_absolute_error': a
    }

    if metrics_ch == metrics:
        print("That's right! Looks like you know your metrics!")

    # flag any classification metric that was mislabeled
    if (metrics_ch['precision'] != metrics['precision']) or (metrics_ch['recall'] != metrics['recall']) or (metrics_ch['accuracy'] != metrics['accuracy']) or (metrics_ch['area_under_curve'] != metrics['area_under_curve']):
        print("Oops! Actually, there are four metrics that are used for classification. Looks like you missed at least one of them.")

    if metrics != metrics_ch:
        print("Oops! Something doesn't look quite right. You should have three metrics for regression, and the others should be for classification. None of the metrics are used for both regression and classification.")


def check_ten(best_fit):
    '''
    INPUT:
    best_fit - a dictionary mapping each metric ('mse', 'r2', 'mae') to the letter of the best-performing model

    OUTPUT:
    nothing returned
    prints statements related to the correctness of the dictionary
    '''
    a = 'decision tree'
    b = 'random forest'
    c = 'adaptive boosting'
    d = 'linear regression'

    best_fitting = {
        'mse': b,
        'r2': b,
        'mae': b
    }

    if best_fit == best_fitting:
        print("That's right! The random forest was best in terms of all the metrics this time!")

    else:
        print("Oops! Actually the best model was the same for all the metrics. Try again - all of your answers should be the same!")