
Add SHAP, LIME, InterpretML under ML and DGL under DL #2663


Open

YADIDidiah24 wants to merge 1 commit into master

Conversation

YADIDidiah24

What is this Python project?

This PR adds SHAP, LIME, and InterpretML under Machine Learning and DGL under Deep Learning.

  1. SHAP (SHapley Additive exPlanations) provides game-theory-based feature explanations for ML models.
  2. LIME (Local Interpretable Model-agnostic Explanations) explains predictions locally by analyzing perturbed inputs.
  3. InterpretML is a framework for training interpretable models and explaining black-box systems; a short usage sketch for all three follows this list.
  4. DGL (Deep Graph Library) is a flexible framework designed for deep learning on graphs.
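
A minimal sketch of how the first three might slot into an ordinary scikit-learn workflow; the RandomForestClassifier, the breast-cancer dataset, and all variable names are illustrative assumptions, not part of this PR:

```python
# Illustrative only: explaining a scikit-learn classifier with SHAP, LIME, and InterpretML.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# SHAP: game-theoretic feature attributions for each prediction.
import shap
shap_values = shap.TreeExplainer(model).shap_values(X[:10])

# LIME: fits a simple local surrogate around one prediction by perturbing its inputs.
from lime.lime_tabular import LimeTabularExplainer
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)

# InterpretML: trains a glass-box Explainable Boosting Machine that is interpretable by design.
from interpret.glassbox import ExplainableBoostingClassifier
ebm = ExplainableBoostingClassifier().fit(X, y)
global_explanation = ebm.explain_global()
```

SHAP and LIME explain an already-trained black-box model after the fact, while InterpretML's EBM is interpretable by construction; that is the main practical distinction between them.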

What's the difference between this Python project and similar ones?

Enumerate comparisons.

These libraries are widely used for Explainable AI (XAI) and interpretable machine learning, distinguishing them from general-purpose ML libraries.

DGL vs. other DL frameworks: DGL specializes in graph neural networks (GNNs) and builds on top of backends such as PyTorch, whereas PyTorch and TensorFlow themselves focus on general deep learning.
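
To illustrate the graph-specific API this comparison refers to, here is a minimal sketch of a single GraphConv layer over a toy graph using DGL's PyTorch backend; the graph, feature sizes, and variable names are illustrative assumptions, not part of this PR:

```python
# Illustrative only: one graph-convolution layer in DGL (PyTorch backend).
import torch
import dgl
from dgl.nn import GraphConv

# A small directed graph with 4 nodes and 3 edges: 0->1, 1->2, 2->3.
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])), num_nodes=4)
g = dgl.add_self_loop(g)            # GraphConv rejects nodes with zero in-degree by default
g.ndata["feat"] = torch.randn(4, 5)  # 5-dimensional feature per node

conv = GraphConv(in_feats=5, out_feats=2)
h = conv(g, g.ndata["feat"])         # message passing over edges, output shape (4, 2)
print(h.shape)
```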

Anyone who agrees with this pull request could submit an Approve review to it.
