In recent times, due to the effects of the pandemic, there has been a great need for a mask-detecting app. Such an app can go a long way in helping to contain the spread of the disease. How so?
The pandemic has brought to light the steps each of us can take to help prevent the spread of the virus. One small yet highly effective measure is wearing masks. Most people do wear masks; however, it is not uncommon for people to wear them incorrectly.
The mask-detector app strives to help identify individuals who fail to follow these health guidelines.
The app can identify whether the user is wearing a mask and whether it is worn the right way. The user only needs to capture a picture with the camera, and the app does the rest.
In this blog, we will take you through the steps that went into building this app using Flutter and TensorFlow Lite. Let us start by understanding how we trained the model to identify faces with or without masks.
Training the Model
To train the model to identify faces with or without masks, we made use of Google's Teachable Machine, which simplifies the entire process of data classification and model training.
We created an image project and defined three classes of data, one for each of the following labels:
- You are Not Wearing Mask
- Good Job! Your mask is on!
- You have not worn your mask properly!
Next, we captured data for the three categories using the webcam and trained the model. Once satisfied with the model's performance, we exported it in TFLite format, which gives a zip file containing the following two files:
- labels.txt – This file contains the class labels that the model is trained to recognize and categorize.
- model_unquant.tflite – The model file in TFLite format.
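For reference, the exported labels.txt simply lists the trained classes, one per line, prefixed with the class index — along these lines (illustrative; the exact content depends on how you named your classes):

```text
0 You are Not Wearing Mask
1 Good Job! Your mask is on!
2 You have not worn your mask properly!
```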
This completes the training process, and we are ready to consume the model in our Flutter application.
Creating a Flutter application
Create a new Flutter application named mask_detector using the command below:
```shell
flutter create mask_detector
```
This will create our application folder structure.
Adding the dependencies and assets
Add the dependencies to pubspec.yaml as below:

```yaml
dependencies:
  image_picker: ^0.6.7+11
  tflite: ^1.1.1
```
Also add an assets folder at the top level of the application. Place the model files generated earlier in the assets folder, along with any app-level images.
Add the assets section to pubspec.yaml:

```yaml
flutter:
  assets:
    - assets/
```
Defining the App View
Now define main.dart to load the Home page, which will include the application view. If needed, you can also add a splash screen using the SplashScreen dependency.
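A minimal main.dart for this setup could look like the sketch below (the MaskDetectorApp name and home.dart file name are our own choices; Home is the widget described next — adjust to your project layout):

```dart
import 'package:flutter/material.dart';

import 'home.dart'; // hypothetical file containing the Home widget

void main() => runApp(MaskDetectorApp());

class MaskDetectorApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Mask Detector',
      debugShowCheckedModeBanner: false,
      home: Home(), // loads the Home page on startup
    );
  }
}
```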
The Home component will be the main component and will enable the user to pick an image from the gallery or capture a new image from the camera.
The initState has the logic to load the model.
```dart
// State fields used across the Home widget
// (requires imports: dart:io, package:image_picker/image_picker.dart,
// package:tflite/tflite.dart)
File _image;
List _output;
bool _loading = false;
final picker = ImagePicker();

@override
void initState() {
  super.initState();
  _loading = true;
  loadModel().then((value) {
    // the model is ready
  });
}

// loadModel loads the model from the assets folder using the tflite plugin
loadModel() async {
  await Tflite.loadModel(
    model: 'assets/model_unquant.tflite',
    labels: 'assets/labels.txt',
  );
}
```
The view shows a default image when the app loads. Later, when the user selects an image from the gallery or camera, the picked image is shown and classified using the model.
```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text('Mask or No Mask'),
      backgroundColor: Colors.deepOrange,
    ),
    backgroundColor: Colors.black12,
    body: Container(
      padding: EdgeInsets.symmetric(horizontal: 20),
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: <Widget>[
          SizedBox(height: 10),
          Text(
            'Are you wearing a mask properly?',
            style: TextStyle(color: Colors.blue, fontSize: 28),
          ),
          SizedBox(height: 50),
          Center(
            child: _loading
                ? Container(
                    width: 300,
                    child: Column(
                      children: <Widget>[
                        Image.asset('assets/masks.png'),
                        SizedBox(height: 50),
                      ],
                    ),
                  )
                : Container(
                    child: Column(
                      children: <Widget>[
                        Container(
                          height: 250,
                          child: Image.file(_image),
                        ),
                        SizedBox(height: 20),
                        _output != null
                            ? Container(
                                padding: EdgeInsets.symmetric(vertical: 10),
                                child: Text(
                                  '${_output[0]['label']}',
                                  style: TextStyle(
                                      color: Colors.white, fontSize: 20.0),
                                ),
                              )
                            : Container(),
                      ],
                    ),
                  ),
          ),
          Container(
            width: MediaQuery.of(context).size.width,
            child: Column(
              children: <Widget>[
                GestureDetector(
                  onTap: pickImage,
                  child: Container(
                    width: MediaQuery.of(context).size.width - 260,
                    alignment: Alignment.center,
                    decoration: BoxDecoration(
                      color: Color(0xFFE99600),
                      borderRadius: BorderRadius.circular(6),
                    ),
                    child: Text(
                      'Take a photo',
                      style: TextStyle(color: Colors.white),
                    ),
                  ),
                ),
                SizedBox(height: 10),
                GestureDetector(
                  onTap: pickGalleryImage,
                  child: Container(
                    width: MediaQuery.of(context).size.width - 260,
                    alignment: Alignment.center,
                    decoration: BoxDecoration(
                      color: Colors.orange,
                      borderRadius: BorderRadius.circular(6),
                    ),
                    child: Text(
                      'Pick from Gallery',
                      style: TextStyle(color: Colors.white),
                    ),
                  ),
                ),
                SizedBox(height: 100),
                Text(
                  'Model powered by Teachablemachine CNN',
                  style: TextStyle(color: Colors.deepOrange, fontSize: 15),
                ),
              ],
            ),
          ),
        ],
      ),
    ),
  );
}
```
On picking an image from the gallery or capturing one with the camera, the image is run through the model as below:
```dart
// Handle the camera image
pickImage() async {
  var image = await picker.getImage(source: ImageSource.camera);
  if (image == null) return null;
  setState(() {
    _image = File(image.path);
  });
  classifyImage(_image);
}

// Handle the gallery image
pickGalleryImage() async {
  var image = await picker.getImage(source: ImageSource.gallery);
  if (image == null) return null;
  setState(() {
    _image = File(image.path);
  });
  classifyImage(_image);
}

// Classify the image using the model
classifyImage(File image) async {
  var output = await Tflite.runModelOnImage(
    path: image.path,
    numResults: 2, // top two predictions; raise to 3 to see all classes
    threshold: 0.5,
    imageMean: 127.5,
    imageStd: 127.5,
  );
  setState(() {
    _loading = false;
    _output = output;
  });
}
```
`_output[0]['label']` holds the top label returned by the model after image classification.
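To make that concrete, runModelOnImage returns a list of predictions sorted by confidence, each a map with index, label, and confidence keys (a sketch with made-up values, assuming the tflite ^1.1.1 plugin's result shape):

```dart
// Hypothetical result for a correctly worn mask
final output = [
  {'index': 1, 'label': 'Good Job! Your mask is on!', 'confidence': 0.97},
  {'index': 0, 'label': 'You are Not Wearing Mask', 'confidence': 0.03},
];

void main() {
  final top = output[0]; // highest-confidence prediction
  final percent = ((top['confidence'] as double) * 100).toStringAsFixed(0);
  print('${top['label']} ($percent%)');
}
```

You could, for instance, display this percentage in the UI alongside the label rather than the bare label alone.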
We also need to dispose of the model resources when the widget is destroyed, as below:
```dart
@override
void dispose() {
  Tflite.close();
  super.dispose();
}
```
Now the app is ready to test.
Try it out to check whether you are wearing your mask properly.
You can find the complete code here.
CONCLUSION
This is an apt use case for the current times, when wearing a mask has become the new norm: an app to identify whether a mask is worn properly or not. This basic model detects only a single face in the frame, but it can be enhanced and trained to identify and flag multiple faces, with or without properly worn masks, in a given frame.
REFERENCES
https://www.udemy.com/course/flutter-deeplearning-course/
https://teachablemachine.withgoogle.com/train/image