diff --git a/ml/cc/exercises/linear_regression_taxi.ipynb b/ml/cc/exercises/linear_regression_taxi.ipynb index 38d3d4b..1f54b63 100644 --- a/ml/cc/exercises/linear_regression_taxi.ipynb +++ b/ml/cc/exercises/linear_regression_taxi.ipynb @@ -264,7 +264,7 @@ "print(\"How many cab companies are in the dataset? \t\tAnswer: {number}\".format(number = num_unique_companies))\n", "\n", "# What is the most frequent payment type?\n", - "most_freq_payment_type = training_df['PAYMENT_TYPE'].value_counts().idxmax()\n", + "most_freq_payment_type = training_df['PAYMENT_TYPE'].mode()[0]\n", "print(\"What is the most frequent payment type? \t\tAnswer: {type}\".format(type = most_freq_payment_type))\n", "\n", "# Are any features missing data?\n", @@ -526,7 +526,8 @@ " epochs = 20\n", " batch_size = 50\n", "\n", - "it takes about 5 epochs for the training run to converge to the final model.\n", + "it takes about 6 epochs for the training run to approach convergence (RMSE\n", + "under 4).\n", "\"\"\"\n", "print(answer)\n", "\n", @@ -534,6 +535,7 @@ "# -----------------------------------------------------------------------------\n", "answer = '''\n", "It appears from the model plot that the model fits the sample data fairly well.\n", + "Specifically, the final RMSE of $3.82 represents around 16% of the mean FARE.\n", "'''\n", "print(answer)" ], @@ -618,21 +620,20 @@ "# How did lowering the learning rate impact your ability to train the model?\n", "# -----------------------------------------------------------------------------\n", "answer = '''\n", - "When the learning rate is too small, it may take longer for the loss curve to\n", - "converge. With a small learning rate the loss curve decreases slowly, but does\n", - "not show a dramatic drop or leveling off. 
With a small learning rate you could\n", - "increase the number of epochs so that your model will eventually converge, but\n", - "it will take longer.\n", + "When the learning rate is too small, it takes longer for the loss curve to\n", + "converge. The loss curve decreases slowly, but does not show a dramatic drop or\n", + "leveling off. You could increase the number of epochs so that your model will\n", + "eventually converge, but it will take longer.\n", "'''\n", "print(answer)\n", "\n", "# Did changing the batch size effect your training results?\n", "# -----------------------------------------------------------------------------\n", "answer = '''\n", - "Increasing the batch size makes each epoch run faster, but as with the smaller\n", - "learning rate, the model does not converge with just 20 epochs. If you have\n", - "time, try increasing the number of epochs and eventually you should see the\n", - "model converge.\n", + "Increasing the batch size makes each epoch run faster, but it means fewer\n", + "learning iterations per epoch. As with the smaller learning rate, the model\n", + "does not converge with just 20 epochs. If you try increasing the number of\n", + "epochs, eventually you should see the model converge.\n", "'''\n", "print(answer)" ],