LIME stands for Local Interpretable Model-Agnostic Explanations. It is a local interpretability algorithm introduced by Marco Tulio Ribeiro et al. in the 2016 paper "Why Should I Trust You?": Explaining the Predictions of Any Classifier. The algorithm is most commonly applied to text and image models.
Constructing a text explainer:
```python
import lime
from lime import lime_text
from lime.lime_text import LimeTextExplainer

# create the explainer
explainer = LimeTextExplainer()

# generate an explanation for one instance
exp = explainer.explain_instance(raw_text_instance, predict_fn)

# explanation as a list of (feature, weight) pairs
exp.as_list()

# explanation as a plot
# %matplotlib inline
fig = exp.as_pyplot_figure()
exp.show_in_notebook(text=False)

# save the explanation as html
exp.save_to_file('/tmp/oi.html')
exp.show_in_notebook(text=True)
```
Example: http://marcotcr.github.io/lime/tutorials/Lime%20-%20basic%20usage%2C%20two%20class%20case.html
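The `predict_fn` passed to `explain_instance` must map a list of strings to an `(n_samples, n_classes)` probability array, exactly like a scikit-learn `predict_proba`. A minimal numpy-only sketch of that interface (the keyword "classifier" below is purely illustrative, not a real model):

```python
import numpy as np

def predict_fn(texts):
    # toy two-class classifier: P(positive) rises with occurrences of "good"
    scores = np.array([t.lower().split().count('good') for t in texts],
                      dtype=float)
    p = 1.0 / (1.0 + np.exp(-(scores - 0.5)))   # sigmoid on keyword count
    return np.column_stack([1.0 - p, p])        # columns: [neg, pos]

probs = predict_fn(['a good good film', 'terrible'])
print(probs.shape)  # (2, 2)
```

Any real model works the same way: wrap its vectorizer and classifier so the function takes raw strings and returns row-normalized probabilities.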
Constructing an image explainer:
```python
from lime import lime_image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image=x,
                                         classifier_fn=predict,
                                         segmentation_fn=segmentation)
img, msk = explanation.get_image_and_mask(explanation.top_labels[0],
                                          positive_only=False,
                                          negative_only=False,
                                          hide_rest=False,
                                          num_features=10,
                                          min_weight=0.05)
```
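`get_image_and_mask` returns the (possibly masked) image plus an integer mask over its pixels; the usual way to view them is `skimage.segmentation.mark_boundaries(img, msk)`. A plain-numpy stand-in for that overlay step (the `overlay_mask` helper is hypothetical, for illustration only):

```python
import numpy as np

def overlay_mask(img, msk, color=(255, 0, 0)):
    # tint every pixel where the mask is nonzero
    out = img.copy()
    out[msk != 0] = color
    return out

img = np.zeros((4, 4, 3), dtype=np.uint8)   # dummy black image
msk = np.zeros((4, 4), dtype=int)
msk[1:3, 1:3] = 1                           # pretend LIME highlighted the center
out = overlay_mask(img, msk)                # center pixels tinted red
```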
Two major pitfalls:
1. The `image` argument to `explain_instance()` must be a numpy array in [height, width, channel] format, so a PyTorch tensor must be converted before being passed in:
```python
numpy_image = tensor_image.permute(1, 2, 0).numpy().astype(np.double)  # [C, H, W] -> [H, W, C]
```
2. The model must be wrapped: convert the incoming numpy array back to a tensor in [batch, channel, height, width] format, and call `.eval()` before predicting:
```python
def predict(input):
    model.eval()                                        # inference mode
    input = torch.from_numpy(input)                     # numpy array -> tensor
    input = torch.as_tensor(input, dtype=torch.float32)
    input = input.permute(0, 3, 1, 2)                   # [N, H, W, C] -> [N, C, H, W]
    output = model(input)
    return output.detach().numpy()                      # back to numpy for LIME
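Both conversions are just axis permutations, so the shapes can be checked with numpy alone before involving a model. A self-contained sketch of the round trip (toy shapes, no torch required; `np.transpose` plays the role of `Tensor.permute`):

```python
import numpy as np

# CHW "tensor-style" image -> HWC numpy, as LIME's explain_instance expects
chw = np.arange(2 * 4 * 5, dtype=np.double).reshape(2, 4, 5)  # [C, H, W]
hwc = np.transpose(chw, (1, 2, 0))                            # [H, W, C]

# inside the wrapped predict: LIME hands back a *batch* of HWC images,
# which must go back to NCHW before the forward pass
batch_hwc = np.stack([hwc, hwc])                              # [N, H, W, C]
batch_nchw = np.transpose(batch_hwc, (0, 3, 1, 2))            # [N, C, H, W]
print(batch_nchw.shape)  # (2, 2, 4, 5)
```

If either permutation is wrong, the model usually still runs but silently mixes spatial and channel axes, so verifying shapes this way is cheap insurance.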