After you train your own model using AutoML Vision Edge, you can use it in your app to label images.
Before you begin
- If you have not already added Firebase to your app, do so by following the steps in the getting started guide.
- Include the ML Kit libraries in your Podfile:
```ruby
pod 'Firebase/MLVision', '6.25.0'
pod 'Firebase/MLVisionAutoML', '6.25.0'
```
After you install or update your project's Pods, be sure to open your Xcode project using its .xcworkspace.
- In your app, import Firebase:
Swift
```swift
import Firebase
```
Objective-C
```objective-c
@import Firebase;
```
1. Load the model
ML Kit runs your AutoML-generated models on the device. However, you can configure ML Kit to load your model remotely from Firebase, from local storage, or both.
By hosting the model on Firebase, you can update the model without releasing a new app version, and you can use Remote Config and A/B Testing to dynamically serve different models to different sets of users.
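For example, here is a minimal sketch of choosing the hosted model with Remote Config. It assumes you also include the Firebase/RemoteConfig pod and define a labeler_model_name parameter in the Remote Config console; both names are illustrative, not part of the ML Kit API:
```swift
import Firebase

// Sketch: read the model name from a hypothetical "labeler_model_name"
// Remote Config parameter, falling back to a default when it has no value.
let remoteConfig = RemoteConfig.remoteConfig()
let modelName = remoteConfig["labeler_model_name"].stringValue ?? "your_remote_model"
let remoteModel = AutoMLRemoteModel(name: modelName)
```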
If you choose to only provide the model by hosting it with Firebase, and not bundle it with your app, you can reduce the initial download size of your app. Keep in mind, though, that if the model is not bundled with your app, any model-related functionality will not be available until your app downloads the model for the first time.
By bundling your model with your app, you can ensure your app's ML features still work when the Firebase-hosted model isn't available.
Configure a Firebase-hosted model source
To use the remotely-hosted model, create an AutoMLRemoteModel object, specifying the name you assigned the model when you published it:
Swift
```swift
let remoteModel = AutoMLRemoteModel(
    name: "your_remote_model"  // The name you assigned in the Firebase console.
)
```
Objective-C
```objective-c
FIRAutoMLRemoteModel *remoteModel =
    [[FIRAutoMLRemoteModel alloc]
        initWithName:@"your_remote_model"];  // The name you assigned in the Firebase console.
```
Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task will asynchronously download the model from Firebase:
Swift
```swift
let downloadConditions = ModelDownloadConditions(
  allowsCellularAccess: true,
  allowsBackgroundDownloading: true
)

let downloadProgress = ModelManager.modelManager().download(
  remoteModel,
  conditions: downloadConditions
)
```
Objective-C
```objective-c
FIRModelDownloadConditions *downloadConditions =
    [[FIRModelDownloadConditions alloc] initWithAllowsCellularAccess:YES
                                         allowsBackgroundDownloading:YES];
NSProgress *downloadProgress =
    [[FIRModelManager modelManager] downloadRemoteModel:remoteModel
                                             conditions:downloadConditions];
```
Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
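If you want to reflect download progress in your UI, one option (a sketch using standard Foundation key-value observation, not an ML Kit-specific API) is to observe the NSProgress returned by the download call:
```swift
// downloadProgress is the NSProgress returned by download(_:conditions:).
// fractionCompleted supports Foundation KVO; keep the observation alive
// for as long as you want updates.
let observation = downloadProgress.observe(\.fractionCompleted) { progress, _ in
  DispatchQueue.main.async {
    // Update a progress indicator here, for example:
    // progressView.progress = Float(progress.fractionCompleted)
  }
}
```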
Configure a local model source
To bundle the model with your app:
- Extract the model and its metadata from the zip archive you downloaded from Firebase console into a folder:
```
your_model_directory
  |____dict.txt
  |____manifest.json
  |____model.tflite
```
All three files must be in the same folder. We recommend you use the files as you downloaded them, without modification (including the file names).
- Copy the folder to your Xcode project, taking care to select Create folder references when you do so. The model file and metadata will be included in the app bundle and available to ML Kit.
- Create an AutoMLLocalModel object, specifying the path to the model manifest file:
Swift
```swift
guard let manifestPath = Bundle.main.path(
    forResource: "manifest",
    ofType: "json",
    inDirectory: "your_model_directory"
) else { return true }
let localModel = AutoMLLocalModel(manifestPath: manifestPath)
```
Objective-C
```objective-c
NSString *manifestPath = [NSBundle.mainBundle pathForResource:@"manifest"
                                                       ofType:@"json"
                                                  inDirectory:@"your_model_directory"];
FIRAutoMLLocalModel *localModel =
    [[FIRAutoMLLocalModel alloc] initWithManifestPath:manifestPath];
```
Create an image labeler from your model
After you configure your model sources, create a VisionImageLabeler object from one of them.
If you only have a locally-bundled model, just create a labeler from your AutoMLLocalModel object and configure the confidence score threshold you want to require (see Evaluate your model):
Swift
```swift
let options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
// Evaluate your model in the Firebase console to determine an appropriate value.
options.confidenceThreshold = 0
let labeler = Vision.vision().onDeviceAutoMLImageLabeler(options: options)
```
Objective-C
```objective-c
FIRVisionOnDeviceAutoMLImageLabelerOptions *options =
    [[FIRVisionOnDeviceAutoMLImageLabelerOptions alloc] initWithLocalModel:localModel];
// Evaluate your model in the Firebase console to determine an appropriate value.
options.confidenceThreshold = 0;
FIRVisionImageLabeler *labeler =
    [[FIRVision vision] onDeviceAutoMLImageLabelerWithOptions:options];
```
If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's isModelDownloaded(remoteModel:) method.
Although you only have to confirm this before running the labeler, if you have both a remotely-hosted model and a locally-bundled model, it might make sense to perform this check when instantiating the VisionImageLabeler: create a labeler from the remote model if it's been downloaded, and from the local model otherwise.
Swift
```swift
let options: VisionOnDeviceAutoMLImageLabelerOptions
if ModelManager.modelManager().isModelDownloaded(remoteModel) {
  options = VisionOnDeviceAutoMLImageLabelerOptions(remoteModel: remoteModel)
} else {
  options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
}
// Evaluate your model in the Firebase console to determine an appropriate value.
options.confidenceThreshold = 0
let labeler = Vision.vision().onDeviceAutoMLImageLabeler(options: options)
```
Objective-C
```objective-c
FIRVisionOnDeviceAutoMLImageLabelerOptions *options;
if ([[FIRModelManager modelManager] isModelDownloaded:remoteModel]) {
  options = [[FIRVisionOnDeviceAutoMLImageLabelerOptions alloc]
      initWithRemoteModel:remoteModel];
} else {
  options = [[FIRVisionOnDeviceAutoMLImageLabelerOptions alloc]
      initWithLocalModel:localModel];
}
// Evaluate your model in the Firebase console to determine an appropriate value.
options.confidenceThreshold = 0.0f;
FIRVisionImageLabeler *labeler =
    [[FIRVision vision] onDeviceAutoMLImageLabelerWithOptions:options];
```
If you only have a remotely-hosted model, you should disable model-related functionality (for example, by graying out or hiding part of your UI) until you confirm the model has been downloaded.
You can get the model download status by attaching observers to the default Notification Center. Be sure to use a weak reference to self in the observer block, since downloads can take some time and the originating object can be freed by the time the download finishes. For example:
Swift
```swift
NotificationCenter.default.addObserver(
    forName: .firebaseMLModelDownloadDidSucceed,
    object: nil,
    queue: nil
) { [weak self] notification in
    guard let strongSelf = self,
        let userInfo = notification.userInfo,
        let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
            as? RemoteModel,
        model.name == "your_remote_model"
        else { return }
    // The model was downloaded and is available on the device
}

NotificationCenter.default.addObserver(
    forName: .firebaseMLModelDownloadDidFail,
    object: nil,
    queue: nil
) { [weak self] notification in
    guard let strongSelf = self,
        let userInfo = notification.userInfo,
        let model = userInfo[ModelDownloadUserInfoKey.remoteModel.rawValue]
            as? RemoteModel
        else { return }
    let error = userInfo[ModelDownloadUserInfoKey.error.rawValue]
    // ...
}
```
Objective-C
```objective-c
__weak typeof(self) weakSelf = self;

[NSNotificationCenter.defaultCenter
    addObserverForName:FIRModelDownloadDidSucceedNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
              if (weakSelf == nil || note.userInfo == nil) {
                return;
              }
              __strong typeof(self) strongSelf = weakSelf;

              FIRRemoteModel *model = note.userInfo[FIRModelDownloadUserInfoKeyRemoteModel];
              if ([model.name isEqualToString:@"your_remote_model"]) {
                // The model was downloaded and is available on the device
              }
            }];

[NSNotificationCenter.defaultCenter
    addObserverForName:FIRModelDownloadDidFailNotification
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *_Nonnull note) {
              if (weakSelf == nil || note.userInfo == nil) {
                return;
              }
              __strong typeof(self) strongSelf = weakSelf;

              NSError *error = note.userInfo[FIRModelDownloadUserInfoKeyError];
            }];
```
2. Prepare the input image
Then, for each image you want to label, create a VisionImage object using one of the options described in this section and pass it to an instance of VisionImageLabeler (described in the next section).
Create a VisionImage object using a UIImage or a CMSampleBufferRef.
To use a UIImage:
- If necessary, rotate the image so that its imageOrientation property is .up. (One way to do this is sketched after the code samples below.)
- Create a VisionImage object using the correctly-rotated UIImage. Do not specify any rotation metadata; the default value, .topLeft, must be used.
Swift
```swift
let image = VisionImage(image: uiImage)
```
Objective-C
```objective-c
FIRVisionImage *image = [[FIRVisionImage alloc] initWithImage:uiImage];
```
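Here is one way to perform the rotation step, if you need it. This is an illustrative helper, not part of ML Kit; it redraws the image so the pixel data matches the .up orientation:
```swift
import UIKit

// Redraw a UIImage so its imageOrientation property is .up. draw(in:)
// honors the original orientation, so the rendered copy has upright pixels.
func imageOrientedUp(_ image: UIImage) -> UIImage {
  guard image.imageOrientation != .up else { return image }
  let renderer = UIGraphicsImageRenderer(size: image.size)
  return renderer.image { _ in
    image.draw(in: CGRect(origin: .zero, size: image.size))
  }
}
```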
To use a CMSampleBufferRef:
- Create a VisionImageMetadata object that specifies the orientation of the image data contained in the CMSampleBufferRef buffer. To get the image orientation:
Swift
```swift
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
```
Objective-C
```objective-c
- (FIRVisionDetectorImageOrientation)
    imageOrientationFromDeviceOrientation:(UIDeviceOrientation)deviceOrientation
                           cameraPosition:(AVCaptureDevicePosition)cameraPosition {
  switch (deviceOrientation) {
    case UIDeviceOrientationPortrait:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationLeftTop;
      } else {
        return FIRVisionDetectorImageOrientationRightTop;
      }
    case UIDeviceOrientationLandscapeLeft:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationBottomLeft;
      } else {
        return FIRVisionDetectorImageOrientationTopLeft;
      }
    case UIDeviceOrientationPortraitUpsideDown:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationRightBottom;
      } else {
        return FIRVisionDetectorImageOrientationLeftBottom;
      }
    case UIDeviceOrientationLandscapeRight:
      if (cameraPosition == AVCaptureDevicePositionFront) {
        return FIRVisionDetectorImageOrientationTopRight;
      } else {
        return FIRVisionDetectorImageOrientationBottomRight;
      }
    default:
      return FIRVisionDetectorImageOrientationTopLeft;
  }
}
```
Then, create the metadata object:
Swift
```swift
let cameraPosition = AVCaptureDevice.Position.back  // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
```
Objective-C
```objective-c
FIRVisionImageMetadata *metadata = [[FIRVisionImageMetadata alloc] init];
AVCaptureDevicePosition cameraPosition =
    AVCaptureDevicePositionBack;  // Set to the capture device you used.
metadata.orientation =
    [self imageOrientationFromDeviceOrientation:UIDevice.currentDevice.orientation
                                 cameraPosition:cameraPosition];
```
- Create a VisionImage object using the CMSampleBufferRef object and the rotation metadata:
Swift
```swift
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
```
Objective-C
```objective-c
FIRVisionImage *image = [[FIRVisionImage alloc] initWithBuffer:sampleBuffer];
image.metadata = metadata;
```
3. Run the image labeler
To label objects in an image, pass the VisionImage object to the VisionImageLabeler's process() method:
Swift
```swift
labeler.process(image) { labels, error in
    guard error == nil, let labels = labels else { return }

    // Task succeeded.
    // ...
}
```
Objective-C
```objective-c
[labeler processImage:image
           completion:^(NSArray<FIRVisionImageLabel *> *_Nullable labels,
                        NSError *_Nullable error) {
    if (error != nil || labels == nil) {
        return;
    }

    // Task succeeded.
    // ...
}];
```
If image labeling succeeds, an array of VisionImageLabel objects will be passed to the completion handler. From each object, you can get information about a feature recognized in the image.
For example:
Swift
```swift
for label in labels {
    let labelText = label.text
    let confidence = label.confidence
}
```
Objective-C
```objective-c
for (FIRVisionImageLabel *label in labels) {
    NSString *labelText = label.text;
    NSNumber *confidence = label.confidence;
}
```
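For instance, one way to surface a single result is to pick the highest-confidence label. This is a sketch; note that confidence is an optional NSNumber in the Swift API:
```swift
// Pick the label with the highest confidence, if any, for display.
let topLabel = labels.max {
  ($0.confidence?.doubleValue ?? 0) < ($1.confidence?.doubleValue ?? 0)
}
print(topLabel?.text ?? "No label found")
```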
Tips to improve real-time performance
- Throttle calls to the detector. If a new video frame becomes available while the detector is running, drop the frame (see the sketch after this list).
- If you are using the output of the detector to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. By doing so, you render to the display surface only once for each input frame. See the previewOverlayView and FIRDetectionOverlayView classes in the showcase sample app for an example.
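A sketch of the frame-dropping tip, assuming an AVCaptureVideoDataOutput delegate and the labeler and metadata configured in the sections above. The isProcessing flag is an illustrative property, not ML Kit API:
```swift
var isProcessing = false  // True while a labeling request is in flight.

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
  // Drop this frame if the labeler hasn't finished the previous one.
  guard !isProcessing else { return }
  isProcessing = true

  let image = VisionImage(buffer: sampleBuffer)
  image.metadata = metadata  // Orientation metadata from section 2.
  labeler.process(image) { labels, error in
    isProcessing = false
    // Get the result here, then render the camera frame and the
    // overlay in a single step.
  }
}
```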