The simplest way to use an mlmodel in Xcode, as discussed before, is to drag the model into the project; Xcode then auto-generates the prediction interfaces for it. That approach is very easy, but it makes updating the model inconvenient.
This post covers an alternative: loading an mlmodel from a URL. For details, see the official Apple developer documentation: https://developer.apple.com/documentation/coreml/mlmodel
The workflow is as follows:
1. Provide the path model_path to the .mlmodel file:
NSString *model_path = @"path_to/.mlmodel";
2. Convert the NSString to an NSURL and compile the model at that path. Compilation produces a .mlmodelc bundle in a temporary location; if needed, you can save it to a permanent location: https://developer.apple.com/documentation/coreml/core_ml_api/downloading_and_compiling_a_model_on_the_user_s_device
NSError *error = nil;
NSURL *url = [NSURL fileURLWithPath:model_path isDirectory:FALSE];
NSURL *compile_url = [MLModel compileModelAtURL:url error:&error];
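Because compileModelAtURL: writes the .mlmodelc into a temporary directory, the compiled model should be copied somewhere permanent if you want to reuse it across launches (as the Apple document linked above recommends). A minimal sketch; the "CompiledModels" folder name and the abbreviated error handling are my own assumptions:

```objectivec
// Persist the compiled model so it does not need recompiling on every launch.
// The "CompiledModels" directory name is arbitrary.
NSFileManager *fm = [NSFileManager defaultManager];
NSURL *support_dir = [[fm URLsForDirectory:NSApplicationSupportDirectory
                                 inDomains:NSUserDomainMask] firstObject];
NSURL *dest_dir = [support_dir URLByAppendingPathComponent:@"CompiledModels"
                                               isDirectory:YES];
[fm createDirectoryAtURL:dest_dir
    withIntermediateDirectories:YES
                     attributes:nil
                          error:nil];
NSURL *permanent_url =
    [dest_dir URLByAppendingPathComponent:compile_url.lastPathComponent];
NSError *copy_error = nil;
// A plain copy suffices for the first install; if an old copy may exist,
// replaceItemAtURL:... can be used to swap it out atomically.
[fm copyItemAtURL:compile_url toURL:permanent_url error:&copy_error];
```

On later launches you can pass permanent_url straight to modelWithContentsOfURL: and skip the compile step entirely.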
3. Load the model (of type MLModel) from the compiled model's path:
MLModelConfiguration *model_config = [[MLModelConfiguration alloc] init];
MLModel *compiled_model = [MLModel modelWithContentsOfURL:compile_url configuration:model_config error:&error];
4. Note that with this dynamic loading approach, Core ML only defines the MLFeatureProvider protocol (conceptually similar to virtual functions in C++), so you must write your own classes, conforming to MLFeatureProvider, to feed input to the model and read its output. Below are two such wrapper classes, MLModelInput and MLModelOutput. MLModelInput passes data to the model under the model's input name inputName, while MLModelOutput retrieves prediction results by output name featureName.
The header file:
#import <Foundation/Foundation.h>
#import <CoreML/CoreML.h>

NS_ASSUME_NONNULL_BEGIN

/// Model Prediction Input Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0))
@interface MLModelInput : NSObject<MLFeatureProvider>

// the input name, default is "image"
@property (nonatomic, strong) NSString *inputName;

// data as color (kCVPixelFormatType_32BGRA) image buffer
@property (readwrite, nonatomic) CVPixelBufferRef data;

- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithData:(CVPixelBufferRef)data inputName:(NSString *)inputName;

@end

/// Model Prediction Output Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0))
@interface MLModelOutput : NSObject<MLFeatureProvider>

// the output name, default is "feature"
@property (nonatomic, strong) NSString *outputName;

// feature as a multidimensional array of doubles
@property (readwrite, nonatomic) MLMultiArray *feature;

- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithFeature:(MLMultiArray *)feature;

@end

NS_ASSUME_NONNULL_END
The implementation file:
#import "MLModelInput.h" // import of the header above; the filename is assumed

// Default output name; the header notes "feature" as the default.
static NSString *const DefaultOutputValueName = @"feature";

@implementation MLModelInput

- (instancetype)initWithData:(CVPixelBufferRef)data inputName:(nonnull NSString *)inputName {
    self = [super init];
    if (self) {
        _data = data;
        _inputName = inputName;
    }
    return self;
}

- (NSSet<NSString *> *)featureNames {
    return [NSSet setWithArray:@[self.inputName]];
}

- (nullable MLFeatureValue *)featureValueForName:(nonnull NSString *)featureName {
    if ([featureName isEqualToString:self.inputName]) {
        return [MLFeatureValue featureValueWithPixelBuffer:_data];
    }
    return nil;
}

@end

@implementation MLModelOutput

- (instancetype)initWithFeature:(MLMultiArray *)feature {
    self = [super init];
    if (self) {
        _feature = feature;
        _outputName = DefaultOutputValueName;
    }
    return self;
}

- (NSSet<NSString *> *)featureNames {
    return [NSSet setWithArray:@[self.outputName]];
}

- (nullable MLFeatureValue *)featureValueForName:(nonnull NSString *)featureName {
    if ([featureName isEqualToString:self.outputName]) {
        return [MLFeatureValue featureValueWithMultiArray:_feature];
    }
    return nil;
}

@end
5. Run the prediction and fetch the results. With the two classes above in place, prepare the input data as a CVPixelBuffer, read the model's input name from its MLModelDescription, create an MLModelInput with that name, run the prediction, and then use the featureNames of the returned provider to fetch each output as an MLMultiArray:
// Get the model's input name from its description.
MLModelDescription *model_description = compiled_model.modelDescription;
NSDictionary *dict = model_description.inputDescriptionsByName;
NSArray<NSString *> *feature_names = [dict allKeys];
NSString *input_feature_name = feature_names[0];

// "buffer" is the CVPixelBufferRef holding the input image.
NSError *error;
MLModelInput *model_input = [[MLModelInput alloc] initWithData:buffer inputName:input_feature_name];
MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
id<MLFeatureProvider> model_output = [compiled_model predictionFromFeatures:model_input options:option error:&error];

// Collect every output as an MLMultiArray.
// (std::vector requires this to be an Objective-C++ .mm file.)
NSSet<NSString *> *out_feature_names = [model_output featureNames];
NSArray<NSString *> *name_list = [out_feature_names allObjects];
NSUInteger size = [name_list count];
std::vector<MLMultiArray *> feature_list;
for (NSUInteger i = 0; i < size; i++) {
    NSString *name = [name_list objectAtIndex:i];
    MLMultiArray *feature = [model_output featureValueForName:name].multiArrayValue;
    feature_list.push_back(feature);
}
6. Read the prediction data out of the MLMultiArray for post-processing.
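As a last step, the raw values can be pulled out of each MLMultiArray. A minimal sketch, assuming the output holds doubles (check feature.dataType before reading) and reusing the feature_list from the loop above:

```objectivec
// Read every element of a double-typed MLMultiArray via indexed subscripting.
// For large arrays, feature.dataPointer plus the strides/shape properties is
// faster, but subscripting is the safest starting point.
MLMultiArray *feature = feature_list[0];
NSInteger total = feature.count;
std::vector<double> values;
values.reserve(total);
for (NSInteger i = 0; i < total; i++) {
    values.push_back([feature objectAtIndexedSubscript:i].doubleValue);
}
// "values" now holds the flattened prediction, ready for post-processing.
```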