Kaggle.com is one of the leading platforms for predictive modeling and analytics competitions. Companies and researchers post data sets and real-world challenges that invite statisticians and data scientists to compete in building predictive models that best forecast future outcomes.
Business Need
Porto Seguro, one of Brazil’s largest auto and homeowner insurance companies, posted a challenge on Kaggle.com to build a model that predicts the probability that a driver will initiate an auto insurance claim within the next year. While Porto Seguro has used machine learning for the past 20 years, they’re looking to Kaggle’s machine learning community to explore new, more powerful methods. A more accurate prediction will allow them to further tailor their prices, and hopefully make auto insurance coverage more accessible to more drivers.
Akvelon’s Participation
A team of two Akvelon machine learning engineers and a data scientist enrolled on Kaggle.com to compete against more than 5,000 teams for the top positions on the leaderboard. Submissions are evaluated using the Normalized Gini Coefficient: the raw Gini coefficient ranges from approximately 0 for random guessing to approximately 0.5 for a perfect ordering, and the normalized version divides by that theoretical maximum, so a perfect score is 1.
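For reference, here is a minimal NumPy sketch of the metric as commonly implemented in public Kaggle kernels (not Porto Seguro's internal code); for a binary target, the normalized value works out to 2*AUC - 1.

```python
import numpy as np

def gini(actual, pred):
    # Sort actual outcomes by predicted score, descending; only the ordering
    # of the predictions matters, not their magnitudes.
    order = np.lexsort((np.arange(len(actual)), -np.asarray(pred)))
    ranked = np.asarray(actual)[order]
    # Compare the cumulative share of positives captured to a uniform baseline.
    cum_share = np.cumsum(ranked) / ranked.sum()
    n = len(ranked)
    return cum_share.sum() / n - (n + 1) / (2 * n)

def normalized_gini(actual, pred):
    # Dividing by the Gini of a perfect ordering rescales a perfect score to 1.
    return gini(actual, pred) / gini(actual, actual)
```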
The insurance group provided the same raw data set to all competition participants: 893,000 rows of 59 distinct data points, with little or no specification as to what the data actually represents. The Akvelon team went through several iterations to work out the meaning of each of those data points, cleanse the data, and reduce the number of variables to a manageable subset. If a model includes too many data points that have little or no impact on the output (the event of a driver filing an insurance claim), it becomes overly complex and too slow to generate predictions in a timely manner.
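As an illustration, here is a hedged sketch of such a grooming pass, based on the publicly documented schema of the competition data (features grouped under ps_ind/ps_reg/ps_car/ps_calc prefixes, with missing values encoded as -1); the team's exact feature selection is not reproduced here.

```python
import numpy as np
import pandas as pd

train = pd.read_csv("train.csv")  # the competition's training file

# Make missing values explicit so models and imputers can see them.
train = train.replace(-1, np.nan)

# The ps_calc_* block showed little predictive value in public analyses of
# this data set, so dropping it is a common first reduction of the feature count.
calc_cols = [c for c in train.columns if c.startswith("ps_calc_")]
train = train.drop(columns=calc_cols)

X = train.drop(columns=["id", "target"])
X = X.fillna(X.median())  # simple median imputation; the team's method may differ
y = train["target"]
```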
The work involved in crowdsourcing competitions requires lots of "try, fail, fail fast, make some changes, and try again" iterations, repeated over and over. For example, the team experimented with several prediction models, including Random Forest, Logistic Regression, Extra Trees, k-nearest neighbors, Naïve Bayes, AdaBoost, XGBoost, LightGBM, CatBoost, and neural networks.
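A bake-off loop of that kind might look like the sketch below, which scores each candidate with cross-validated AUC and reports it as a normalized Gini (2*AUC - 1); the estimators and settings are illustrative, and X and y are the groomed feature matrix and target from the previous step.

```python
from lightgbm import LGBMClassifier
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, n_jobs=-1),
    "extra_trees": ExtraTreesClassifier(n_estimators=200, n_jobs=-1),
    "adaboost": AdaBoostClassifier(),
    "xgboost": XGBClassifier(n_estimators=300, max_depth=4, eval_metric="auc"),
    "lightgbm": LGBMClassifier(n_estimators=300, max_depth=4),
}

for name, model in candidates.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: normalized Gini ~ {2 * auc - 1:.4f}")
```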
One of the chosen approaches was t-SNE (t-distributed stochastic neighbor embedding), a nonlinear dimensionality reduction technique that is particularly well suited for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. Specifically, it models each high-dimensional object as a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects by distant points. The image below shows the result of this analysis.
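Such an embedding can be produced with scikit-learn; this is a minimal sketch, assuming the groomed feature matrix X and target y from above (t-SNE is expensive, so only a subsample is embedded).

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

X_arr, y_arr = np.asarray(X, dtype=float), np.asarray(y)

# Embed a random subsample; embedding every row would be impractical.
idx = np.random.RandomState(0).choice(len(X_arr), size=5000, replace=False)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_arr[idx])

# Color points by the target so any claim/no-claim structure is visible.
plt.scatter(embedding[:, 0], embedding[:, 1], c=y_arr[idx], s=3, cmap="coolwarm")
plt.title("t-SNE embedding of a training subsample")
plt.show()
```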
The most promising alternatives were XGBoost and LightGBM, which placed the team in the top half of the competition leaderboard with a score of ~0.282; the top-ranked team had a score of 0.290, so Akvelon's team was at 97.2% of the leader (higher is better). The next step was to adjust the model hyperparameters, such as the depth of the trees, the number of estimators, and the sizes of the training and validation sets. Those experiments raised the score to about 0.284. With a promising model in hand, it was the right time to go back, revisit the groomed data set, and take a more disciplined approach to choosing which data points to include and which to replace.
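A hedged sketch of that tuning loop is below; the grid values are illustrative, not the team's actual settings.

```python
from lightgbm import LGBMClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Hold out a validation set; its size is itself one of the knobs to experiment with.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

param_grid = {
    "max_depth": [3, 4, 5],            # depth of the trees
    "n_estimators": [200, 400, 800],   # number of estimators
    "learning_rate": [0.02, 0.05, 0.1],
}
search = GridSearchCV(LGBMClassifier(), param_grid, scoring="roc_auc", cv=5)
search.fit(X_train, y_train)
print(search.best_params_, "normalized Gini ~", 2 * search.best_score_ - 1)
```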
Further research and investigation helped the Akvelon team improve the models. One of the improvements was adding a neural network, which raised the score to ~0.285 and placed the team in the top 2% of the entire competition.
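The team's exact architecture is not public; the sketch below shows a small feed-forward network of the kind commonly blended with gradient-boosting predictions, with layer sizes and blend weights that are assumptions rather than the team's settings.

```python
import tensorflow as tf

# A small binary classifier over the numeric feature matrix from earlier steps.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X_train, y_train, epochs=10, batch_size=512, validation_split=0.2)

# Blend with the gradient-boosting output; gbm_pred is a hypothetical array of
# GBM probabilities for the same rows, and the 0.7/0.3 weights are illustrative.
nn_pred = model.predict(X_valid).ravel()
blended = 0.7 * gbm_pred + 0.3 * nn_pred
```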
After some additional tweaks, adjustments, and lessons learned from past Kaggle winners' approaches, the Akvelon team ranked in the top 1% with a score of 0.287 against the leader's 0.291 (98.6% of the top-scoring team).
At the final stage of the competition, the organizers scored submissions against the remaining 70% of the complete data set, unseen until then by any of the competitors. Each team's result on this new data determined its final position on the leaderboard.
The Akvelon team placed 22nd out of more than 5,000 teams in the final standings once the complete data set was applied, finishing the competition in the top 0.5%.
Benefits and Results
Participating in the Porto Seguro Safe Driver Prediction competition increased our expertise in the Machine Learning field. The trial-and-error portion of this project taught us to refine data quickly, focusing on accuracy and detail in every model. The competition also provided a real-world data set for a real-world problem, so our models can be applied to different situations and everyday occurrences.
Finishing in the top 0.5% is a proud accomplishment when competing against more than 5,000 other teams.
If you would like to learn more about our Machine Learning capabilities and solutions, contact us today!