Dim Sum AI

One of my pet peeves is the use of "AI" as a label for what is actually just the latest trend in AI, more specifically known as machine learning, and even more specifically as deep learning, based on neural networks.

Admittedly, it predominates and is pretty effective, but there was a whole history of AI before this (and there is work on hybrid AI, combining neural nets and symbolic AI). I don't even remember my undergrad AI textbook mentioning neural networks, just the latest edition of Perceptrons debunking them again (the second edition basically said nothing had changed since the first).

For example, the AI Weirdness book does state that by AI they really mean machine learning or deep learning, and that the other method is rule-based programming. So I thought, oh good, we haven't forgotten rule-based systems, but then they cited HTML as an example. Sigh.

But I shouldn't throw stones. I added a "Dim Sum AI" feature to my Talk Dim Sum app, because, you know, marketing. And I figured no one would know what "Dim Sum ML" means (although "Dim Sum Deep Learning" has a ring to it). Nevertheless, the Apple framework supporting this feature, which allows my loyal users to take photos of their dim sum and hopefully get a good guess at what dumpling they're looking at, is called Core ML, so I can't plead ignorance.

Core ML is pretty easy to use, especially if you're doing the minimum, which is what I do. First create an ML project using the Create ML tool, which you can launch from Xcode.

Create a model source for the project.

And in the model source, click on Training Data to add the training data.

This is conventionally a folder named train with the training images partitioned into subfolders named for the labels you want, ideally with about the same number of images in each folder. Create ML also expects a sibling folder named test in which you place images to evaluate your new classifier, but I don't bother with that.
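For example, the layout might look something like this (the dish names here are just illustrative, not necessarily the labels the app uses):

```
train/
    har gow/
        photo001.jpg
        photo002.jpg
        ...
    siu mai/
        ...
    char siu bao/
        ...
test/
    har gow/
        ...
    siu mai/
        ...
```

Each subfolder name becomes a classification label, so the folder names are effectively your label set.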

Hit the Train button once the training data is loaded.

When it's done, I go straight to Output and click on Get to download the classifier and add it into the Xcode project that will use it.

And then whenever I need to update the classifier with new training images, I just repeat the process, without having to change the code, which is a combination of Core ML and Vision (start with this image classification code sample).
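The runtime side can be sketched roughly like this. This is a minimal sketch, not the app's actual code: "DimSumClassifier" is a hypothetical model name standing in for whatever class Xcode generates from the .mlmodel file you add to the project.

```swift
import CoreML
import UIKit
import Vision

// A minimal sketch of running a Create ML image classifier through
// Vision. "DimSumClassifier" is a placeholder: Xcode generates a
// class with this shape from the .mlmodel added to the project.
func classifyDimSum(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? DimSumClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    // Vision wraps the Core ML model in a request that returns
    // classification results ranked by confidence.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier) // e.g. the most likely dumpling label
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

Because the model class name and labels come from the .mlmodel file, retraining with new images and dropping in the regenerated model leaves this code untouched, which is exactly why repeating the training process requires no code changes.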
