Kids training their own TensorFlow model?


  • Kids training their own TensorFlow model?

    Summer project:

    The kids want to try using Google's Teachable Machine or a similar online tool to experiment with making their own .tflite file, and then have the example code in the SDK recognize an entirely different set of objects. And, as usual, their grumpy mechE, non-software coach is gonna struggle helping them. I have several not-well-structured families of questions:

    1. What files, and where in the Control Hub file structure, need to be replaced with their new .tflite file? What's involved here?
    2. Other than the obvious changes to the image-recognition label strings, to match the object category strings in their new .tflite file, is there anything else in the example code we'd have to alter or consider?
    3. Teachable Machine, at least, doesn't seem to train on segmented images, just lots of whole video frames. Absent that, will the bounding boxes and coordinates in the example code still appear on the DS? And what about multiple detections of differing types?

    thanks in advance.

    Coach Z
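
    As a reference point for question 1, here is a minimal sketch of how a custom model is typically loaded in the SDK's TensorFlow pipeline. This assumes a recent SDK in which `TFObjectDetector` supports `loadModelFromFile()`; the file path and label strings below are made-up examples, and the custom model is normally copied onto the Control Hub (e.g. under `/sdcard/FIRST/tflitemodels/`) rather than replacing any SDK asset. Whether a Teachable Machine export works in this pipeline at all is discussed in the replies below.

```java
// Fragment of an FTC OpMode, following the SDK's
// ConceptTensorFlowObjectDetection sample; assumes `vuforia` and
// `hardwareMap` are already initialized as in that sample.

// Hypothetical path on the Control Hub; any readable location works.
private static final String TFOD_MODEL_FILE =
        "/sdcard/FIRST/tflitemodels/MyCustomModel.tflite";

// Hypothetical labels; they must match the model's classes, in training order.
private static final String[] LABELS = { "ObjectA", "ObjectB" };

private TFObjectDetector tfod;

private void initTfod() {
    int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
            "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
    TFObjectDetector.Parameters tfodParameters =
            new TFObjectDetector.Parameters(tfodMonitorViewId);
    tfodParameters.minResultConfidence = 0.75f;
    tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
    // The only change from the sample: load from a file on the Control Hub
    // instead of the bundled asset, with your own labels.
    tfod.loadModelFromFile(TFOD_MODEL_FILE, LABELS);
}
```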

  • #2
    My team tried using Teachable Machine this last season, but ultimately decided that it wasn't a good fit for what they wanted.
    I'm a software engineer so I *should* be more up on what they were doing, but to be honest I didn't supervise this experiment very closely.

    From what I understood, Teachable Machine trained by capturing a series of images from an attached USB camera (they just plugged the robot's camera into the laptop) and then building a model from those images. They'd move the bot around so it captured the target from different angles, distances, etc.

    What they wanted was to detect the goal tower and find out where it was in the frame so they could aim the robot at it. However, the model was effectively binary: yes, the image resembles the training images, or no, it doesn't. So they could tell whether the goal was in the image, but not where it was. They were using the "Image" trainer; I believe Teachable Machine does not handle "Object"-type training, which is what you'd want in order to detect a ring or the goal. With object training you have to supply images plus a drawn region around each object you want the model to pay attention to.

    After their experiments and a bit of reading about TensorFlow, I came to the same conclusion they did: Teachable Machine wouldn't do the type of training they needed. They'd have to use regular/full TensorFlow training, and at the time they didn't have enough time before the next tournament to learn all of that.

    For help with training TensorFlow, I'd suggest the FTC Discord server. There are some really sharp individuals there who can probably help. I sometimes go there myself, but usually I just point the team toward it, and often they get an answer to their problem in minutes. (Note: it can get a bit profane for some of the more conservative/younger team members.)



    • #3
      Coach Z,

      I'm the engineer who does the TensorFlow support in the FTC SDK. I have an idea that might be helpful. Can you send me a private message?

      -Liz Looney



      • #4
        Liz Looney -

        I seem to be disallowed from sending a PM on this platform; I get a "Not authorized to view this page" message. I'm really interested in your suggestions and guidance, though. Perhaps we could just move this discussion to email?

        I am [email protected]

        Thanks in advance!

        Zain Saidin



        • #5
          Hi, I have a group of students who are interested in using Teachable Machine and exporting the result as JavaScript. I know this is crazy, but could anyone point to the steps they need to start with? Teachable Machine itself is easy, but where do they export and import all their data to get it working? Sorry, that might be a loaded question. You could email me at [email protected]. Thanks.



          • #6
            I don't want to disappoint you, but at the same time I don't want your students to waste time and effort on something that might not be possible.

            I don't think Teachable Machine will produce an Object Detection model that can be used with the FTC SDK. I may have out-of-date information, but I thought it only produced Image Classification models. The difference is that an Object Detection model tells you where in the image your object(s) are, not just that the image belongs to a particular class.
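
            To make that difference concrete, here is an illustrative sketch (not FTC SDK code; the labels, shapes, and numbers are made up) of what the two model types give you for a single camera frame:

```java
public class ModelOutputs {
    // Index of the highest-confidence class in a classifier's output.
    static int argmax(float[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return best;
    }

    // Horizontal center of a normalized [ymin, xmin, ymax, xmax] box:
    // the kind of value you could actually steer the robot toward.
    static float centerX(float[] box) {
        return (box[1] + box[3]) / 2f;
    }

    public static void main(String[] args) {
        String[] labels = { "goal", "no_goal" };

        // Image classification (Teachable Machine's export): one score per
        // class for the WHOLE frame. It can say "a goal is visible" but
        // gives no position to aim at.
        float[] classifierOutput = { 0.93f, 0.07f };
        System.out.println("classifier: " + labels[argmax(classifierOutput)]);

        // Object detection (what the FTC TFOD pipeline consumes): each
        // detection carries a bounding box plus a class and a score.
        float[] box = { 0.12f, 0.40f, 0.38f, 0.71f };
        System.out.println("detector: " + labels[0] + " at centerX=" + centerX(box));
    }
}
```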

            Also, if you export JavaScript, how are you going to use that JavaScript in your OpModes? It is possible to run JavaScript on an Android device using a WebView component, but it is not trivial to do so, and I'm not sure you can do it without significantly modifying the robot controller app.



            • #7
              One more thought... There is a library that would allow you to use an image classification model with FTC. I haven't used it, but it is here: https://github.com/OutoftheBoxFTC/EasyTensorflowAPI

              Maybe you can use Teachable Machine to make an image classification model and then use EasyTensorflowAPI to run that model.



              • #8
                We are testing the Teachable Machine approach as well, with no success thus far. It looks like Robotics Eagles were successful: https://www.youtube.com/watch?v=aeMWWvteF2U&t=2s. There is a link to their code repository in the description.

