Introduction
Thank you for being a very early adopter of our interpretability engine!
The Leap Interpretability Engine lets you extract insights from your model to understand what it has truly learned. With it, you can uncover hidden failure modes, spurious correlations, and biases before deployment.
Everything runs locally, so you never need to upload any proprietary data or models. Simply install our Python library, and only the interpretability results are uploaded to our dashboard.
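As a minimal sketch of the install step, assuming the library is published on PyPI under the name leap-ie (the package name is an assumption here; check the Quick Start guide for the exact command):

```
# Hypothetical package name -- confirm against the Quick Start guide
pip install leap-ie
```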
The library currently supports computer vision models for the following use cases:
Classification
Segmentation
Leap can be used to:
Predict Failure Modes: sanity-check your model to ensure it hasn't learned any features that will cause unwanted behavior.
Perform Targeted Fine-Tuning: find problematic features to inform your model debugging process.
Improve Trust: generate visual artifacts that demonstrate model quality to internal and external stakeholders.
To find out how, try our Quick Start guide.