Overfitting and Undercoding

Today I continued my adventures in Neural Networks. I’m working towards a system that uses “Artificial Intelligence” to detect features from pictures of plants in a greenhouse (such as leaves, branches, fruits, flowers). The final idea is to use this system to control a robot that would automate part of the work in a greenhouse.

Now the idea is to start from the code base of a student who used deep-learning image segmentation techniques to detect cracks in concrete. The student got good results, but the final code (or at least the version I got my hands on) was honestly a bit of a mess. Data reading, data wrangling, model building, training, and testing were all put together in one long Python script, in a way that made it really hard both to understand what was going on and to build upon it for new projects (at least in a robust manner).

So I spent most of the day rewriting the code into multiple scripts and modules. I must say that, at the very least, this gave me a good understanding of what was going on (and a clear view of the parts that were “magicked” in). I started to think about how I could impress upon new students the need to organize one’s code well (especially since the biggest beneficiaries are the future yous).
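The kind of structure I mean looks roughly like this: each pipeline stage as its own function (and, in the real project, its own module) instead of one long script. This is just a toy sketch, and every name and stage here is made up for illustration:

```python
# Toy sketch of a modular pipeline: each stage is a separate function
# (in a real project, a separate module). All names are illustrative.

def load_data():
    """Stand-in for the data reading / wrangling module."""
    return list(range(10))

def build_model():
    """Stand-in for the model-building module."""
    return lambda x: 2 * x

def train(model, data):
    """Stand-in for the training-loop module."""
    return [model(x) for x in data]

data = load_data()
model = build_model()
predictions = train(model, data)
```

The point is less the code itself than the seams: with stages split like this, a new project can swap out `load_data` for greenhouse images without touching the rest.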

My new problem is that the neural network trained on the plant data is “overfitting towards nothingness”. In other words, the NN got wise to the fact that the easiest way to minimize its loss function is to return blank segmentation results (which are not useful). I probably need to find a better loss function, but that is a problem for tomorrow.
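This failure mode usually comes from class imbalance: when the interesting pixels (leaves, fruits, and so on) are a small fraction of the image, a pixel-wise loss rewards predicting all background. One candidate fix is a loss that scores overlap with the foreground directly, such as Dice loss. A minimal NumPy sketch, where the image size and the 5% foreground fraction are made-up numbers for illustration:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    # pred and target: per-pixel values in [0, 1];
    # 0 = perfect overlap, 1 = no overlap at all.
    intersection = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

# Sparse target: roughly 5% of pixels are foreground.
rng = np.random.default_rng(0)
target = (rng.random((64, 64)) < 0.05).astype(float)

blank = np.zeros_like(target)              # the "nothingness" prediction
pixel_accuracy = np.mean(blank == target)  # looks great: ~0.95
dice = soft_dice_loss(blank, target)       # looks terrible: ~1.0
```

The contrast is the point: the blank prediction gets about 95% pixel accuracy (so a plain pixel-wise loss is nearly satisfied), while the Dice loss stays close to its maximum because the intersection with the foreground is zero.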

The good thing about moving from the design stage to the training stage is that I can use the training runs to catch up on the books I wanted to read during the break! :-P
