SHAP & Game Theory For Recommendation Systems


SHAP for recommendation systems: how to use existing machine learning models as a recommendation system. We introduce a game-theoretic approach to the study of recommendation systems with strategic content providers. Such systems should be fair and stable. Showing that traditional approaches fail to satisfy these requirements, we propose the Shapley mediator. We show that the Shapley mediator fulfills the fairness and stability requirements, runs in linear time, and is the only economically efficient mechanism satisfying these properties.

Speaker: Avi Ben Yossef

Transcript

– Hey guys, let me share with you the research we did on building a recommendation system out of the machine learning models we already have in our organization. A little bit about myself: I'm currently Director of Data Science at First Digital Bank, a new bank in Israel that you may hear about in the near future. I have 10 years of experience building various types of systems: machine learning, deep learning, recommendation systems, big data, and AI.

A little bit about the agenda. You may have heard about SHAP before, so we'll first talk about the different applications and use cases where SHAP is used today, in our systems and in other systems. After that we'll talk a little about the concept and the theory behind SHAP and, of course, game theory, and take a short deep dive into SHAP itself. Then we'll see what the right architecture looks like for this type of application, building a recommendation system using SHAP. We'll look at alternatives and at the advantages SHAP has compared to other solutions you may already use, and at the end there will be a short demo of how to use it.

So let's start with SHAP applications, or why we use it. Today SHAP is mainly used for explainability, to explain the predictions that our machine learning models make. For example, AWS SageMaker uses SHAP to explain predictions, as a debugging tool for models built on that platform. So the main usage today is to explain the predictions made by a trained model. Our research asked how to use this technology to help us build a recommendation system on top of our existing models, for example models from the world of advertising: click-through-rate (CTR) models that predict the probability of a click on an advertisement. The goal was to bring recommendation systems to advertising campaigns, recommending the whole set of campaign preferences for future campaigns. We had different kinds of models, like CTR and other prediction models, and we thought about how we could use them together with SHAP to build a recommendation system for new customers and new campaigns.
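To ground the explainability use case, here is a minimal, self-contained sketch of explaining predictions from a tree model with SHAP. The synthetic data, column names, and model settings are illustrative assumptions standing in for an organization's existing CTR model, not details from the talk.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

# Tiny synthetic stand-in for an existing click-through-rate (CTR) model;
# feature names and the data-generating rule are assumptions for illustration.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "hour_of_day": rng.integers(0, 24, 1000),
    "user_age": rng.integers(18, 70, 1000),
    "past_clicks": rng.poisson(2.0, 1000),
})
y = ((0.03 * X["past_clicks"] + 0.002 * X["hour_of_day"]
      + rng.normal(0, 0.05, 1000)) > 0.1).astype(int)

ctr_model = xgb.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)

# SHAP explains each prediction as a sum of per-feature contributions.
explainer = shap.TreeExplainer(ctr_model)
shap_values = explainer.shap_values(X)

# Contribution of every feature to the first prediction
# (positive pushes the predicted click probability up, negative pushes it down).
print(dict(zip(X.columns, shap_values[0])))
```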
Before we jump to our solution, a little bit about game theory and SHAP. In the example there are Alice, Bob, and Celine, who share meals at a restaurant. We have the history of their meals and how much was paid each time: Alice went alone and ate for $80, Bob ate for $56, and so on, plus the combinations, for instance Alice and Bob together paying $80 and all three together paying $90. The nice thing is that we take all the possible orderings of A, B, and C and, for each ordering, use the historical data to work out each person's marginal contribution.

Take the ordering A, B, C. Alice arrives first, so her contribution is the $80 she spent alone. Bob joins next, and because the history tells us that A and B together also spend $80, his marginal contribution in this ordering is zero. Celine joins last, and the remaining $10 needed to reach the $90 that all three spend together is hers. We do this for every possible ordering, each time changing the order and reading the coalition values from the historical data, and at the end we average every person's marginal contributions. That average is the Shapley value of each "feature", here each person, and it represents how much that person can be expected to spend if they all go to the restaurant together in the future. The last line here, in red and blue, says they might spend something like $24 in the future if they go together, but a change in any one person's behavior can change this kind of prediction.

So we get not only the feature importance but also the value importance of this game with all its players. Compared with other explanation approaches, for example the feature importance in the XGBoost model API, we don't only learn how important each feature is, such as the ages of Bob and Alice, or whether Bob and Alice went to the restaurant without Celine. We also learn how important each value is: how much it matters that Alice came, and how much she spent. From this we can explain future predictions, even for situations that never happened in the past. Under the hood, SHAP tries replacing every value of every feature, one at a time, in the Shapley value calculation, and measures how important each change is to the prediction. So we are not only predicting the past based on the past; we can estimate future possibilities, combinations of features that may never have occurred before. That is very important for recommendation systems: not only to rely on the past, but to estimate the value of new combinations of features that haven't happened yet but could, because we have enough information to estimate how such a change would affect the whole society of features.
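To make the restaurant example concrete, here is a minimal sketch of the Shapley value calculation, averaging each player's marginal contribution over all arrival orders. The coalition values for Alice alone, Bob alone, Alice with Bob, and all three together come from the talk; the remaining coalition values are assumptions added only so the game is fully specified.

```python
from itertools import permutations

# Coalition values (total restaurant bill) for every subset of Alice (A), Bob (B),
# and Celine (C). Values for {A}, {B}, {A,B}, {A,B,C} are from the talk; the rest
# are assumed so the characteristic function is complete.
v = {
    frozenset():      0,
    frozenset("A"):  80,
    frozenset("B"):  56,
    frozenset("C"):  70,   # assumed
    frozenset("AB"): 80,
    frozenset("AC"): 85,   # assumed
    frozenset("BC"): 72,   # assumed
    frozenset("ABC"): 90,
}
players = ["A", "B", "C"]

def shapley_values(players, v):
    """Average each player's marginal contribution over every arrival order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += v[with_p] - v[coalition]   # marginal contribution of p
            coalition = with_p
    return {p: totals[p] / len(orders) for p in players}

print(shapley_values(players, v))   # a fair split of the $90 joint bill
```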
Now a little bit about the architecture for using SHAP this way. In the usual pipeline we prepare the dataset, train and validate the model, and make predictions. To explain the model's predictions we need to pass two things to SHAP: the training data the model was built on, and the model itself. From those two inputs SHAP calculates the Shapley values for all the features in the model, and with them we can of course explain the predictions the model made.

The output is the feature importance and the attribute importance of every prediction. For every attribute we understand not only how important it is, but whether it pushed the result for better or for worse: for example, how much this feature drove bad results for a specific campaign, and how much it helped improve the results of another. After calculating the Shapley values we can build a service that explains our predictions, and use our model and data to build a recommendation service that recommends the right features and attributes for a specific campaign. We can also filter by the campaign's preferences: if a customer tells us he wants to advertise in California, at specific hours of the day, on specific publishers, we can suggest all the other preferences that will help him get better results, for example specific customer demographics. Using his past data, we train on his past campaigns and learn from all the combinations in the past, exactly as we did with Alice and Bob. And now we predict not only the combinations that appeared in the past; SHAP also helps us find good combinations that never happened, and we can recommend that the campaign manager target and create those combinations in the future. So we are starting to build recommendation systems here out of SHAP, our existing models, and our existing training data.

We were already familiar with other solutions for calculating information gain and feature importance, like the XGBoost info-gain API I mentioned, and another open-source project called LIME for explaining models. The big advantages of SHAP are the consistency and accuracy of its results: it is much less affected by small changes in the model or in its values, and it is very consistent in its results. We could discuss the theory behind that in more depth, and I have also added references to papers that explain exactly this issue. Another main advantage of SHAP is that it supports different types of models: deep learning models, Keras models, XGBoost models, and scikit-learn. Here is a quick demo of the code that creates, in very few lines, all the Shapley values we have talked about. First we filter the historical data down to a specific customer, all of his past campaigns. We prepare the data, train the model on it, and pass the data and the trained model to SHAP. SHAP creates Shapley values for all the historical data, and from them we create all types of combinations.
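Here is a rough sketch of that demo flow, under assumed file and column names (campaign_history.csv, customer_id, clicked, and the feature list are all illustrative): filter one customer's past campaigns, train a CTR model, compute SHAP values, and rank single settings and co-occurring pairs of settings by their average contribution. The real system described in the talk also scores combinations that never appeared together; this simplified version only ranks what is present in the history.

```python
import pandas as pd
import xgboost as xgb
import shap
from itertools import combinations

# Hypothetical campaign log; file name, column names, and feature set are
# illustrative assumptions, not taken from the talk.
df = pd.read_csv("campaign_history.csv")             # one row per served impression
df = df[df["customer_id"] == "customer_42"]          # this advertiser's past campaigns only

features = ["hour_of_day", "region", "publisher", "device", "age_group"]
X = pd.get_dummies(df[features].astype("category"))  # one column per (feature, value)
y = df["clicked"]                                    # 1 if the impression was clicked

model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = pd.DataFrame(explainer.shap_values(X), columns=X.columns, index=X.index)

# Mean contribution of each (feature, value) setting, measured only on the rows
# where that setting was actually used.
single = {
    col: sv.loc[X[col] == 1, col].mean()
    for col in X.columns if (X[col] == 1).any()
}

# Mean combined contribution of pairs of settings that co-occurred in the history;
# this is where combinations of features, rather than single values, surface.
pairs = {}
for a, b in combinations(single, 2):
    mask = (X[a] == 1) & (X[b] == 1)
    if mask.any():
        pairs[(a, b)] = (sv.loc[mask, a] + sv.loc[mask, b]).mean()

# Recommend the settings and combinations that push the predicted CTR up the most.
print(sorted(single.items(), key=lambda kv: kv[1], reverse=True)[:5])
print(sorted(pairs.items(), key=lambda kv: kv[1], reverse=True)[:5])
```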
That includes combinations of features and values that never happened together in the past data, but whose parts appear in different rows of the history. After that, we filter for the combinations that will improve the results, for example the click-through rate, to get a higher percentage of clicks relative to impressions, by surfacing the most important and most impactful features and combinations of features, not just a single feature. In business intelligence it is more common to chart a specific feature with all of its values and say, "okay, these values will help us improve the results of the campaign." Here we are recommending combinations of features, and those combinations are much more targeted, much more valuable, and much more accurate than recommending a specific value of a specific feature. That is the main advantage of SHAP here: we can use it not only as a tool for explaining models, but also to build recommendation systems on top of our existing models.

Here is the GitHub repository for SHAP; it is open source and you can use it. I also added two other examples, papers and blog posts, that talk about different usages of SHAP, for example explaining and recommending which credit score to give a specific bank customer and explaining to the customer why he got that score. That can help us a lot, maybe also today, now that I am managing the data science group at our Israeli bank. This was a short talk, but I think we covered the possibilities of using SHAP in different kinds of use cases, from explaining models to building recommendation systems, and I hope it helps you look at your existing models and think about how you can build recommendation systems for your customers that help them manage and improve their results. That's all, happy to get your feedback. Thank you, guys.


 
About Avi Ben Yossef

First Digital Bank

Director of Data Science and AI, Big Data & Machine Learning expert, with over 10 years of experience building various systems across machine learning, recommendation, big data, and optimization.

Leads a data science team responsible for developing algorithms that solve diverse business challenges by designing and implementing a unique research operation, ML methods, and infrastructure.