Training A Model in Less Than A Day: A Beginner’s Guide to Google Cloud Platform AutoML Vision
Today, I trained a model in one day. With a little help from my friends on the labeling piece… but yes, it is a reality.
With Google Cloud Platform’s new AutoML line of products, anything seems possible even if you do not have a background in machine learning.
In this case, I used AutoML Vision to take on the Airbus Ship Kaggle challenge data, mostly for work but also out of curiosity to see if a novice could really take a shot at playing in the Kaggle playground.
So, what are we waiting for…
1. Navigate to Vision in your Google Cloud Project Home page, and hit “Set Up.” This will turn on all of the APIs required for this project.
2. Download the data. This can take a minute, so maybe go get a workout in or read a book… If you have your own data set already uploaded, then awesome. If not and you need or want data to play with, go to Kaggle’s Airbus Ship Detection competition page and download the data here.
3. There are multiple ways to upload data to an AutoML Vision project. When you create a new AutoML data source, a Google Cloud Storage bucket will be created in association with the project. If you do not already have your data in a GCS bucket, then you can directly upload your images and folders to the new bucket that will be created (ending with “vcm”). In my case, the data was already sitting in another bucket in a buddy’s project, so I had to complete a GCS transfer job. Follow along below either way!
Click the “Datasets” tab in your AutoML project after you have activated all of the necessary APIs and you land at the “home page” of AutoML. In the upper left-hand corner, click “New Dataset.” It will bring you to the screen above.
4. In my case, I had to transfer data from another bucket. AutoML can import images in three different ways: uploading them from your local computer, selecting a csv file, or choosing to import images later. If you just downloaded the data from Kaggle or have your images sitting locally, skip to step 8.
In my case, I had to do two things. First, I had to complete a GCS Storage Transfer job to import my photos from one GCS bucket to another. Then, I had to create a csv (from an old one) and use a python script to alter the cloud storage file path. I will cover both of these steps; you can access the sample csv and the python script I used on my Github page here.
5. Let’s navigate back to our Google Cloud Storage section in our console for a second. Notice that after an AutoML project has been opened, a new bucket is created. In my case it is titled “ships-automl-vision-vcm”. The other bucket in this screenshot is the bucket that contains all of the ships data (“ships-automl-vision”).
6. To create a transfer job that will move all of my data from one bucket to another, click on “Transfer” under the “Browser” menu option on the left hand side of the GCS console view above. You will get to the page below. Click on “Create Transfer”.
You can create Google Cloud Storage transfer jobs from data sources outside of a Google Cloud Storage bucket as well. As you can see below, the options include Amazon S3 buckets and even fetching data from a URL.
In this case, I selected my original bucket with my data as the source, and the new “vcm” bucket as the destination.
Select “Run now”, and hit “Create”.
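If you would rather script this step than click through the console, the same job can be created through the Storage Transfer Service API (`transferJobs.create`). Below is a minimal sketch of building just the request body; the project and bucket names are the ones from this walkthrough, so swap in your own, and the actual API call and scheduling are left out:

```python
# Sketch: build the JSON body for a bucket-to-bucket Storage Transfer job.
# Project and bucket names below are placeholders from this walkthrough.
def make_transfer_job(project_id, source_bucket, sink_bucket):
    return {
        'description': 'Copy ship images into the AutoML vcm bucket',
        'status': 'ENABLED',
        'projectId': project_id,
        'transferSpec': {
            'gcsDataSource': {'bucketName': source_bucket},
            'gcsDataSink': {'bucketName': sink_bucket},
        },
    }

job = make_transfer_job('my-project', 'ships-automl-vision', 'ships-automl-vision-vcm')
```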
7. Now you will need to create a csv file that sits in the new “vcm” GCS bucket. This csv file simply contains one column of gs:// file paths pointing to all of the images you wish to import. See below for an example of mine:
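As a sketch of the shape of this file, here are a few lines of python that write one; the bucket name is the “vcm” bucket from this walkthrough, and the image filenames are made up:

```python
# Sketch: generate the one-column csv of gs:// paths that AutoML Vision imports.
# Bucket name matches this walkthrough; filenames are hypothetical examples.
bucket = 'ships-automl-vision-vcm'
image_files = ['ship_0001.jpg', 'ship_0002.jpg']  # in practice, your full image list

with open('train.csv', 'w') as f:
    for name in image_files:
        f.write('gs://{}/{}\n'.format(bucket, name))
```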
To create your own, you can do this manually, or simply pull down my example copy from my Github here and run the following simple python script to find and replace my file path with the name of your GCS bucket. If you named yours the same, then you can actually re-use this csv as-is!
import re

# open your csv and read as a text string
with open('train.csv', 'r') as f:  # remember to replace filepath here!
    my_csv_text = f.read()

find_str = 'airbus_ship_detect'  # replace with your original bucket name
replace_str = 'ships-automl-vision-vcm'  # replace with the destination bucket name that AutoML created

# substitute
new_csv_str = re.sub(find_str, replace_str, my_csv_text)

# open new file and save
new_csv_path = './train-new.csv'  # or whatever path and name you want
with open(new_csv_path, 'w') as f:
    f.write(new_csv_str)
For those who need reminding, remember to replace my filepath above with where you saved the csv (see commented line) as well as replace the names of the GCS buckets with whatever you named your storage buckets.
The photos will take a moment to load, depending on how many images you have. With the Kaggle Airbus Ships dataset, it took about 2 minutes.
8. Now, you should see a page that looks like this, with all of the photos you just uploaded:
Click “Add Label” to start entering the labels you will use for your data. In the case of the Airbus Ships dataset, the problem is a simple “Ship/No Ship” question. The labeling task involved labeling images with ships as “Ship”, and images in the dataset with no ships as “No Ship”. To label an image, either select the image to enlarge it and add a label in the right-hand side pane, or select multiple images at a time, if you are confident in what the thumbnail tiles show, and add a label to all of them:
If you are working with the Airbus Ships dataset, you are probably concerned about labeling all 104,000 images included in the train folder. “But you said this would be done in one day!”… Yes, I did! What my roommate and I managed to do (well, mostly her) was test how many images needed to be labeled to get above 90% accuracy. The good news is that around 1,500 images is where you start fluctuating between 94% and 95% accuracy, and I am sure you would reach something closer to 98% if you kept labeling images.
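If you would rather not label in the UI at all, the AutoML import csv can also carry a label column after each gs:// path. For the Airbus data, Kaggle’s train_ship_segmentations.csv lists one row per ship per image, with an empty EncodedPixels value when an image contains no ship, so labels can be derived from it. Here is a sketch using made-up rows (I use “No_Ship” with an underscore, since simple identifiers are safest for csv labels):

```python
def aggregate_labels(rows):
    """Collapse Kaggle's per-ship rows into one has-ship flag per image."""
    labels = {}
    for image_id, encoded_pixels in rows:
        has_ship = bool(encoded_pixels.strip())
        labels[image_id] = labels.get(image_id, False) or has_ship
    return labels

# made-up rows in the shape of train_ship_segmentations.csv (ImageId, EncodedPixels)
rows = [('a.jpg', '1 5'), ('a.jpg', ''), ('b.jpg', '')]
labels = aggregate_labels(rows)  # a.jpg has a ship, b.jpg does not

# write an AutoML import csv with a label column after each gs:// path
bucket = 'ships-automl-vision-vcm'  # the bucket name from this walkthrough
with open('labels.csv', 'w') as f:
    for image_id, has_ship in sorted(labels.items()):
        f.write('gs://{}/{},{}\n'.format(bucket, image_id,
                                         'Ship' if has_ship else 'No_Ship'))
```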
So, images have been labeled. Now we need to train the model.
Training a model on AutoML Vision is as simple as clicking a button, but you have a few options as you can see below:
Click the “Train” tab next to the Images tab that you were just working with to label your images. Your screen will not look like mine above (since I have already trained models) but you will still see the popup window.
You have a chance to name your model; keep in mind that this is important for versioning if you plan on experimenting with training.
Then you can select from a small group of training options. You can train your model for one hour for free, or bump up to as many as 24 hours. Look at the AutoML Vision Pricing here for an idea of how much this costs; if you have a larger dataset where accuracy is extremely important to you (e.g., if you are building models for IRL production), I would recommend this option.
When you choose the option that works best for you (in the case of the Airbus ships dataset, I selected the free option) hit “Start Training”.
Once training is complete, here is the dashboard you will see. The quick view shows average precision and recall for your model.
AutoML Vision gives you a full evaluation of your model beyond what you see above. Click on “See Full Evaluation”:
Notice that you can toggle the score threshold and further explore how to optimize your model.
If these metrics do not mean much to you, or you are just getting started in machine learning, consult this beginner’s guide for AutoML. While some of the definitions of terms there seem okay, as a beginner you might need more clarification on topics such as precision vs. recall. Go explore what these mean in the real world, as well as topics like machine learning fairness, which is mentioned several times in the guide.
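As a quick refresher on those two terms, here is a toy computation by hand; the true and predicted labels below are made up (1 = Ship, 0 = No Ship), not output from my model:

```python
# Toy precision/recall computation on hypothetical Ship(1)/No Ship(0) calls.
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)  # of everything called "Ship", how much really was
recall = tp / (tp + fn)     # of all real ships, how many were caught
```

Raising the score threshold typically trades recall for precision, which is exactly what the threshold slider in the full evaluation view lets you explore.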
9. Now it is time to test your model. What the user may not realize is that under the hood, AutoML automatically sets aside a portion of the data during training to use for testing. If you want to manually split your data into training and testing sets, you can do that in AutoML as well. For our purposes today, we will upload a couple of photos from the Airbus Ships dataset (under the “test” folder from the data you downloaded from the Kaggle website) to see how our model performed.
If you scroll down, you get more than a confidence rating: you can see how to programmatically call the new model you just trained, which is handy if you want to test many images rather than upload each one manually:
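For reference, here is a sketch of what that programmatic call can look like with the google-cloud-automl python client (the v1beta1 generation of the library); the project and model ids are placeholders, and you would find your own on the model’s page:

```python
# Sketch: call a trained AutoML Vision model programmatically.
# Project/model ids are placeholders; library usage reflects the v1beta1 client.

def model_resource(project_id, model_id, location='us-central1'):
    # AutoML models live under a regional resource path
    return 'projects/{}/locations/{}/models/{}'.format(project_id, location, model_id)

def predict_image(project_id, model_id, image_path):
    # imported here so the helper above works without the library installed
    from google.cloud import automl_v1beta1
    client = automl_v1beta1.PredictionServiceClient()
    with open(image_path, 'rb') as f:
        payload = {'image': {'image_bytes': f.read()}}
    return client.predict(model_resource(project_id, model_id), payload)
```

Calling `predict_image('my-project', '1234567890', 'test_ship.jpg')` would return the model’s label scores for that image, assuming your credentials are set up.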
So… there you go! A model trained in one day with Google’s Vision API as the basis for your custom model.