Pinned · Published in TDS Archive · Dec 19, 2021
Introduction to SHAP with Python
How to create and interpret SHAP plots: waterfall, force, mean SHAP, beeswarm and dependence

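As a taste of what the article covers, here is a minimal sketch of three of the plots using the shap library; the random-forest model and synthetic features f0–f3 are placeholder assumptions, not the article's dataset.

```python
# Minimal sketch: SHAP waterfall, beeswarm, and mean-SHAP bar plots.
# Model and data are illustrative stand-ins.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 4)), columns=["f0", "f1", "f2", "f3"])
y = 2 * X["f0"] + X["f1"] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# shap.Explainer dispatches to TreeSHAP for tree-based models
explainer = shap.Explainer(model, X)
explanation = explainer(X)

shap.plots.waterfall(explanation[0])   # one prediction, feature by feature
shap.plots.beeswarm(explanation)       # distribution of SHAP values per feature
shap.plots.bar(explanation)            # mean absolute SHAP per feature
```
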
Pinned · Published in TDS Archive · Feb 12, 2024
7 Lessons from an ML Internship at Intel
Automation, machine learning and LLMs in the chip industry

Pinned · Published in TDS Archive · Sep 19, 2022
What is Explainable AI (XAI)?
An introduction to XAI, the field aimed at making machine learning models understandable to humans

Pinned · Published in TDS Archive · Jun 28, 2022
The Ultimate Guide to PDPs and ICE Plots
The intuition, maths and code (R and Python) behind partial dependence plots and individual conditional expectation plots

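For a quick illustration of the two plot types, a sketch using scikit-learn's PartialDependenceDisplay; the gradient-boosting model and make_regression data are assumptions for demonstration, not the article's example.

```python
# Minimal sketch: PDP and ICE curves overlaid for a single feature.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" draws the individual conditional expectation (ICE) curves
# together with their average, the partial dependence curve, for feature 0
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")
plt.show()
```
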
Pinned · Published in TDS Archive · Apr 29, 2022
Analysing Fairness in Machine Learning
Doing an exploratory fairness analysis and measuring fairness using equal opportunity, equalized odds and disparate impact

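A rough sketch of how the three metrics can be computed from predictions, following the common conventions (equal opportunity as TPR parity, equalized odds as TPR and FPR parity, disparate impact as a ratio of positive-prediction rates); the arrays, the 0/1 group encoding, and which group counts as privileged are all placeholder assumptions.

```python
# Minimal sketch: three group-fairness metrics from labels, predictions,
# and a binary protected attribute.
import numpy as np

def rate(pred, cond):
    # fraction of positive predictions among rows satisfying cond
    return pred[cond].mean()

def fairness_report(y_true, y_pred, group):
    tpr = {g: rate(y_pred, (group == g) & (y_true == 1)) for g in (0, 1)}
    fpr = {g: rate(y_pred, (group == g) & (y_true == 0)) for g in (0, 1)}
    pos = {g: rate(y_pred, group == g) for g in (0, 1)}
    return {
        "equal_opportunity_gap": abs(tpr[0] - tpr[1]),    # TPR parity
        "equalized_odds_gap": max(abs(tpr[0] - tpr[1]),
                                  abs(fpr[0] - fpr[1])),  # TPR and FPR parity
        # assumes group 0 is unprivileged, group 1 privileged
        "disparate_impact": pos[0] / pos[1],
    }

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(fairness_report(y_true, y_pred, group))
```
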
Published in TDS Archive · Oct 16, 2024
The Accuracy vs Interpretability Trade-off Is a Lie
Why, if we look at the bigger picture, black-box models are not more accurate

Published in TDS Archive · Oct 8, 2024
Evaluating Edge Detection? Don’t Use RMSE, PSNR or SSIM
Empirical and theoretical evidence for why Figure of Merit (FOM) is the best edge-detection evaluation metric

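For context, a sketch of the standard Pratt Figure of Merit, which rewards detected edge pixels by how close they fall to the ground-truth edge; the toy edge maps are placeholders and alpha = 1/9 is the conventional scaling constant, though the article may use a variant.

```python
# Minimal sketch: Pratt's Figure of Merit for a detected vs ideal edge map.
import numpy as np
from scipy.ndimage import distance_transform_edt

def figure_of_merit(ideal, detected, alpha=1.0 / 9.0):
    # distance from every pixel to the nearest ideal edge pixel
    dist = distance_transform_edt(~ideal.astype(bool))
    d = dist[detected.astype(bool)]      # distances at detected edge pixels
    n = max(ideal.sum(), detected.sum()) # penalises missed and spurious edges
    return np.sum(1.0 / (1.0 + alpha * d ** 2)) / n

ideal = np.zeros((10, 10), dtype=int)
ideal[5, :] = 1                          # a horizontal ground-truth edge
detected = np.zeros_like(ideal)
detected[6, :] = 1                       # detection shifted one pixel down
print(figure_of_merit(ideal, detected))  # < 1; equals 1 for a perfect match
```
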
Published in TDS Archive · Sep 30, 2024
Explaining Anomalies with Isolation Forest and SHAP
Isolation Forest is an unsupervised, tree-based anomaly detection method. See how both KernelSHAP and TreeSHAP can be used to explain its…

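A minimal sketch of the TreeSHAP route mentioned in the blurb, applying shap's TreeExplainer to a fitted scikit-learn IsolationForest; the synthetic data and planted outliers are placeholders, and the KernelSHAP side of the article is omitted here.

```python
# Minimal sketch: explaining Isolation Forest anomaly scores with TreeSHAP.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
X[:5] += 6                      # a few obvious outliers

iso = IsolationForest(random_state=0).fit(X)

# TreeExplainer reads the fitted trees directly, so no surrogate sampling is needed
explainer = shap.TreeExplainer(iso)
shap_values = explainer.shap_values(X)
print(shap_values[:5])          # per-feature contributions for the outliers
```
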
Published in TDS Archive · Sep 10, 2024
Do We Really Need Deep Learning for Coastal Monitoring?
An in-depth exploration of how machine learning stacks up against traditional coastal erosion monitoring methods

Published in TDS Archive · Jun 26, 2024
A Deep Dive on LIME for Local Interpretations
The intuition, theory, and code for Local Interpretable Model-agnostic Explanations (LIME)

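To illustrate the kind of local explanation LIME produces, a minimal sketch for tabular data; the iris dataset and random-forest classifier are stand-ins, not the article's example.

```python
# Minimal sketch: a local LIME explanation for one tabular prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Fit a weighted local surrogate around one instance and report feature effects
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```
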