[To save yourself some heartache: if you somehow ended up here first, go read this article before continuing!]
So after banging my head against the wall for two weeks, I went back to my smart person at Google with my frustrations. He advised me to follow the O’Reilly article and do the training process the manual way: hand-labeling all of the objects in my images, converting those labels into TFRecord format, then retraining the model and implementing it. Except…
- Installing LabelImg is a B&^% if you are on a Mac. Or Linux. Or anything but maybe Windows (I run away from Windows machines for a living, so I do not know, but I am assuming it goes more smoothly than it did for me).
- Once LabelImg was installed, I went through and labeled all of my images. You know, the ones with multiple classes of objects in each image [please, keep this in mind once again…].
- protoc and protobuf will give you problems when you install the Object Detection API.
- All of the above took me a whole weekend to figure out…
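For context on what all that labeling actually produces: LabelImg saves each image’s annotations as a Pascal VOC-style XML file, and the TFRecord conversion scripts read those files back in. Here is a minimal, standard-library-only sketch of pulling the class names and boxes out of one annotation — the sample file contents and class names are made up for illustration, not taken from my dataset:

```python
import xml.etree.ElementTree as ET

# A made-up example of the XML LabelImg writes for one image
# (note: multiple object classes in a single image, as in my data).
SAMPLE = """<annotation>
  <filename>img_001.jpg</filename>
  <object>
    <name>cat</name>
    <bndbox><xmin>48</xmin><ymin>30</ymin><xmax>120</xmax><ymax>98</ymax></bndbox>
  </object>
  <object>
    <name>dog</name>
    <bndbox><xmin>5</xmin><ymin>10</ymin><xmax>60</xmax><ymax>80</ymax></bndbox>
  </object>
</annotation>"""

def parse_voc(xml_text):
    """Return (filename, list of labeled boxes) from one VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "label": obj.find("name").text,
            "box": tuple(int(bb.find(t).text)
                         for t in ("xmin", "ymin", "xmax", "ymax")),
        })
    return root.find("filename").text, boxes

filename, boxes = parse_voc(SAMPLE)
print(filename)              # → img_001.jpg
print([b["label"] for b in boxes])  # → ['cat', 'dog']
```

The real conversion scripts do the same parse and then pack those fields into `tf.train.Example` records, but the XML-reading step above is where most of the hand-labeling work ends up.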
At this point in the weekend, many bowls of Cheerios later and having not left my house for three days, I was fed up and looked elsewhere for help. I found this awesome tutorial series on Medium and started following it. Very, very similar steps…
*Note: At this point I will pause before moving on to the next part in this series. Please bookmark this tutorial. I 100% plan on going back through and repeating my steps with my data correct this time [stay tuned for why on this]. The steps are excellent, minus a few things if you are a newbie like I was, trying to find the light at the end of the long, dark tunnel of transfer learning.
So what was the problem? I went through all of the steps, trained my model… and froze at Step 5. How to test the model?
The problem I soon ran into (as a complete newbie) was how to test the model. The author of the Medium series used a video as his test data, and his Python code did not work when I tried to feed it the individual test images I had set aside.
So, I ditched the “manual way” of doing my project, and went back to different smart people at Google for more assistance.
Before I move on, I will pause to address two things:
1. Having smart people around to help you definitely helps. It can speed up the parts of your efforts that leave you in a deep, dark corner, gnawing on Funyuns with no end in sight, where giving up feels like a great option. Could I have worked myself out of my hard place without them? Probably. But it would have taken longer than the roughly four weeks I have now spent working on this project off and on “in my free time”.
2. At this point I felt like a massive failure. If you are anything like me, failure is not acceptable (for the most part) or, when it happens, it leaves you deeply scarred until you finally accept that you are human and trying to do something really hard, like teaching yourself machine learning. What I quickly found out was that I had not started at the beginning, but somewhere in the middle. I had no idea what I was doing.
So where did I end up? See this next post for the end… of the beginning of my journey with machine learning!