We’re very excited to present MLPaint, the Real-Time Handwritten Digit Recognizer, a web app that runs on MLDB. Check it out in the demo below:
We created two demo notebooks that cover the image processing and machine learning required to put this together. The first, mentioned above, explores the machine learning behind MLPaint, while the second covers an image processing technique the plugin relies on: convolutions.
The plugin is hosted on GitHub if you want to check out the implementation.
We can use machine learning to recognize the meaning of images. By training our models on the MNIST dataset, we can recognize digits written by hand. As we show in the Recognizing Handwritten Digits demo, we can also explain our model’s decision process: every MLDB model can be introspected to understand how each feature drives its predictions. In this case, that means showing what each pixel contributed, and how. Here is a visual representation of why our model thinks that the digit below is an ‘8’ and not a ‘3’:
Being able to understand why models behave a certain way is one of the guiding design principles of MLDB. MLPaint is a great example of what white-box machine learning can be.
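MLDB’s actual models and introspection API aren’t reproduced here, but the per-pixel-contribution idea behind the explanation above can be sketched with a toy linear model. In a logistic regression, each pixel’s contribution to the score is simply its value times its learned weight, so the decision decomposes exactly into a per-pixel breakdown. The following is a minimal, self-contained sketch on synthetic 4×4 “images” (all data and names are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 4x4 "images" of two synthetic classes.
# Class 1 lights up the top rows, class 0 the bottom rows, plus noise.
def make_sample(label):
    img = rng.normal(0.0, 0.1, size=(4, 4))
    if label == 1:
        img[:2, :] += 1.0
    else:
        img[2:, :] += 1.0
    return img.ravel()

X = np.array([make_sample(label) for label in (0, 1) * 100])
y = np.array([0, 1] * 100)

# Train a tiny logistic-regression classifier with gradient descent.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# "White-box" explanation: each pixel's contribution to the score is
# its value times its learned weight; summing them (plus the bias)
# gives the model's decision for this image.
sample = make_sample(1)
contributions = (sample * w).reshape(4, 4)
score = contributions.sum() + b
print(np.round(contributions, 2))
print("prediction:", int(score > 0))
```

Because the score is a plain sum of contributions, the heatmap of `contributions` plays the same role as MLDB’s pixel-level explanation: it shows which pixels pushed the prediction toward one class or the other.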
Convolutions are frequently used in computer vision, particularly to manipulate images and detect edges. Here are a few examples of different kernels we show how to apply in this demo:
These operations can be quite compute-intensive. Because MLDB uses a suite of optimizations to speed up data processing, convolutions run quickly, especially when using TensorFlow operators.
Check out our Convolutions demo to perform your own convolutions!