Rodrigo Silva, in TDS Archive — Exploring Feature Extraction with CNNs: Using a convolutional neural network to check specialization in feature extraction. Nov 25, 2023
Pacmed — Explainability for tree-based models: which SHAP approximation is best? Understanding TreeSHAP algorithms' failure modes. Jan 12, 2022
Charu Makhijani, in TDS Archive — Explainable AI: Unfold the Blackbox: Build trust in machine learning with XAI; a guide to SHAP and Shapley values. May 26, 2022
Salih Salih, in TDS Archive — Three Interpretability Methods to Consider When Developing Your Machine Learning Model: Introduction to SHAP, LIME, and Anchors, with examples, similarities, pros and cons. Mar 4, 2022
Salih Salih, in TDS Archive — Understanding Machine Learning Interpretability: Introduction to machine learning interpretability, driving forces, taxonomy, an example, and notes on interpretability assessment. Jan 26, 2022
José Padarian, in TDS Archive — Explaining a CNN-generated soil map with SHAP: Using SHAP to corroborate that the digital soil mapping CNN is capturing sensible relationships. Apr 7, 2020
Dr. Holger Bartel — Explainable AI Insurance Ratings: The insurance business is complex; let's make it explainable. Jun 7, 2022
Chau Pham, in Artificial Intelligence in Plain English — Understanding SHAP for Interpretable Machine Learning: The idea of the Shapley value, why the order of features matters, how to move from Shapley values to SHAP, the story… Aug 27, 2020
MobiDev, in Becoming Human: Artificial Intelligence Magazine — Using Explainable AI in Decision-Making Applications: There is no instruction manual for a decision-making process; however, important decisions are usually made by analyzing tons of data to find the… Jun 13, 2022
Noga Gershon Barak, in TDS Archive — InterpretML: Another Way to Explain Your Model: An overview of the InterpretML package, which offers new explainability tools alongside existing ones. Nov 17, 2021
Dave Cote, M.Sc. — Demonstrating the Power of Feature Engineering, Part II: How I Beat XGBoost with Linear Regression! Yes, you read correctly! Feb 8, 2022
Vinícius Trevisan, in TDS Archive — Boruta SHAP: An Amazing Tool for Feature Selection Every Data Scientist Should Know: How to use Boruta and SHAP to build a feature selection process, with Python examples. Jan 25, 2022
Ula La Paris, in The Startup — Push the Limits of Explainability: An Ultimate Guide to the SHAP Library: A guide to the advanced and lesser-known features of the Python SHAP library, based on an example of tabular data… Jun 5, 2020
Aneesh Bose, in TDS Archive — Parallelize Your Massive SHAP Computations with MLlib and PySpark: A stepwise guide to efficiently explaining your models using SHAP. May 28, 2022
Gaurav Agarwal — Explainable AI: When Life Gives You LIME, Make Mojito: "I tested my ML model on the cross-validation dataset and it generated outstanding metrics. There is no way it will not perform well in…" Jun 5, 2022