Pinned · Published in Towards Data Science · Dec 19, 2021
Introduction to SHAP with Python
How to create and interpret SHAP plots: waterfall, force, mean SHAP, beeswarm and dependence

Pinned · Published in Towards Data Science · Feb 12
7 Lessons from an ML Internship at Intel
Automation, machine learning and LLMs in the chip industry

Pinned · Published in Towards Data Science · Sep 19, 2022
What is Explainable AI (XAI)?
An introduction to XAI, the field aimed at making machine learning models understandable to humans

Pinned · Published in Towards Data Science · Jun 28, 2022
The Ultimate Guide to PDPs and ICE Plots
The intuition, maths and code (R and Python) behind partial dependence plots and individual conditional expectation plots

Pinned · Published in Towards Data Science · Apr 29, 2022
Analysing Fairness in Machine Learning
Doing an exploratory fairness analysis and measuring fairness using equal opportunity, equalized odds and disparate impact

Published in Towards Data Science · Oct 16
The Accuracy vs Interpretability Trade-off Is a Lie
Why, if we look at the bigger picture, black-box models are not more accurate

Published in Towards Data Science · Oct 8
Evaluating Edge Detection? Don’t Use RMSE, PSNR or SSIM
Empirical and theoretical evidence for why Figure of Merit (FOM) is the best edge-detection evaluation metric

Published in Towards Data Science · Sep 30
Explaining Anomalies with Isolation Forest and SHAP
Isolation Forest is an unsupervised, tree-based anomaly detection method. See how both KernelSHAP and TreeSHAP can be used to explain its…

Published in Towards Data Science · Sep 10
Do We Really Need Deep Learning for Coastal Monitoring?
An in-depth exploration of how machine learning stacks up against traditional coastal erosion monitoring methods

Published in Towards Data Science · Jun 26
A Deep Dive on LIME for Local Interpretations
The intuition, theory, and code for Local Interpretable Model-agnostic Explanations (LIME)