Use the package manager pip to install `leap-ie`:
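```
pip install leap-ie
```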
To authenticate with the `leap-ie` library, you will need an API key from the Leap dashboard. Head over to the dashboard and create an account.
When first signing in, you will be prompted to generate an API key. Keep this key safe as it will only be displayed once.
If you lose your API key, a new one can be generated in the dashboard settings page.
A user can only have one active API key at a time, so generating a new key will invalidate the old one.
To quickly try out the `leap-ie` library on a prebuilt model, we provide easy access to all torchvision image classification models via `leap_ie.models.get_model`.
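For example, a minimal sketch (the exact call signature may differ; `"resnet18"` is just an illustrative torchvision model name):

```python
from leap_ie.models import get_model

# Load a prebuilt torchvision image classifier by name.
preprocessing_fn, model, class_list = get_model("resnet18")
```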
The `get_model` function takes the name of any torchvision image classification model. We can also automatically pull image classification models from HuggingFace:
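A sketch of what this might look like, assuming a HuggingFace repo id can be passed in place of a torchvision model name (the identifier format here is an assumption):

```python
# Hypothetical: pull an image classification model from HuggingFace
# by passing a repo id instead of a torchvision model name.
preprocessing_fn, model, class_list = get_model("microsoft/resnet-50")
```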
The `get_model` function returns a tuple with 3 values:

- `preprocessing_fn`: The preprocessing function used on inputs for inference.
- `model`: The model.
- `class_list`: List of class names corresponding to the model's output classes.
To run the Leap Interpretability Engine on this model, simply pass these values and some additional configuration to the `engine.generate` function:
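A minimal sketch of such a call, assuming `engine` is imported from the top-level `leap_ie` package and the API key is passed under a `leap_api_key` config entry (both the import path and the key name are assumptions; the parameter names are described below):

```python
from leap_ie import engine  # assumed import path

results_df, results_dict = engine.generate(
    project_name="quickstart",                     # any string
    model=model,                                   # from get_model
    class_list=class_list,                         # from get_model
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},  # key name assumed
    target_classes=[0],                            # class indexes to analyse
    preprocessing=preprocessing_fn,                # from get_model
)
```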
In the example above we use the following properties:
- `project_name`: Used to group multiple runs in the Leap Dashboard. This can be any string.
- `model`: The model value output from `get_model`.
- `class_list`: The class list value output from `get_model`.
- `config`: A config dictionary containing your generated Leap API key.
- `target_classes`: The indexes of classes from the class list you want to analyse with Leap.
- `preprocessing`: The preprocessing function output from `get_model`.
For a full list of arguments and config values, see the API Reference.
When you run this function, results will start to appear in the Leap Dashboard immediately, and will refresh automatically as the run continues.
We also support logging directly to Weights and Biases. See Integrations.
If you want to analyse results locally, `engine.generate` also returns a DataFrame and a dictionary:

- `results_df`: Contains the results of the prototype, isolation and entanglement analyses, including relative paths to image files on the local filesystem. For more information on the different types of results, see Concepts.
- `results_dict`: Contains probability values at each stage of the generation process. This is only required for advanced debugging use cases.
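Once the call returns, the DataFrame can be inspected with standard pandas operations (a sketch; the available columns depend on your run):

```python
# results_df is a pandas DataFrame of analysis results.
print(results_df.columns)
print(results_df.head())
```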
To run using your own model, simply replace the values previously returned by the `get_model` function with your own model, class list and preprocessing function (optional):
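A sketch under the same assumptions as above, where `my_model`, `my_class_list` and `my_preprocessing_fn` are hypothetical placeholders for your own objects:

```python
results_df, results_dict = engine.generate(
    project_name="my_project",
    model=my_model,                                # your own model
    class_list=my_class_list,                      # e.g. ["cat", "dog"]
    config={"leap_api_key": "YOUR_LEAP_API_KEY"},  # key name assumed
    preprocessing=my_preprocessing_fn,             # optional
)
```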
Currently we support image classification and segmentation models only. We expect the model to take a batch of images as input, and return a batch of logits (NOT probabilities).
For most models this will work out of the box, but if your model returns something else (e.g. a dictionary, or probabilities) you might have to edit it, or add a wrapper before passing it to `engine.generate`:
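For example, a minimal PyTorch wrapper for a model whose forward pass returns a dictionary (the `"logits"` key here is an assumption about your model's output, not part of the leap-ie API):

```python
import torch.nn as nn

class LogitsWrapper(nn.Module):
    """Unwrap a dictionary output so the model returns a plain
    batch of logits, as engine.generate expects."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # Assumes the wrapped model returns {"logits": tensor, ...}.
        return self.model(x)["logits"]

model = LogitsWrapper(model)
```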
The `generate` function returns a pandas DataFrame and a dictionary of NumPy arrays. If you're in a Jupyter notebook, you can view this DataFrame inline using `engine.display_df(results_df)`, but for the best experience we recommend you head to the Leap app, or log directly to your Weights and Biases dashboard.
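In a notebook cell, that looks like:

```python
# Render the results DataFrame inline in a Jupyter notebook.
engine.display_df(results_df)
```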
For more information about the data we return, see Concepts. If used with samples (see Sample Feature Isolation), the DataFrame contains feature isolations for each sample, for the target classes (if provided), or for the top 3 predicted classes.
We support both PyTorch and TensorFlow. Specify your framework with the `mode` parameter, using `"tf"` for TensorFlow and `"pt"` for PyTorch.
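As a sketch, assuming `mode` is supplied via the config dictionary (its exact placement is an assumption; see the API Reference):

```python
config = {
    "leap_api_key": "YOUR_LEAP_API_KEY",
    "mode": "pt",  # "pt" for PyTorch, "tf" for TensorFlow
}
```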
If using PyTorch, we expect the model to take images in channels-first format, e.g. of shape `[1, channels, height, width]`. If using TensorFlow, we expect channels-last format, e.g. `[1, height, width, channels]`.
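Concretely, for a batch containing a single 224×224 RGB image (the spatial size is illustrative):

```python
import torch
import tensorflow as tf

pt_batch = torch.zeros(1, 3, 224, 224)  # PyTorch: channels first
tf_batch = tf.zeros((1, 224, 224, 3))   # TensorFlow: channels last
```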