Tensorflow Object Detection


  • Tensorflow Object Detection

    Hi everyone,

    We have been using a webcam with our control hub to detect the Skystone position. However, the object detection program keeps combining the stones into one stone. Does anyone know how to fix this issue?

    Thanks!

  • #2
    Unfortunately, I believe that this is the sad truth of TensorFlow this year.



    • #3
      @P0W3R_ZURG3 This has been discussed over on Reddit. If you decrease the threshold for the TensorFlow confidence level, you may get better results. The default minimum confidence is set to 0.8 in the sample op mode. Decreasing it to a lower value (0.6, 0.5, or even 0.4) helps, since TensorFlow is then more likely to identify the Stones or Skystones adjacent to the target Skystone.
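      For reference, here is a hedged sketch of where that threshold lives, assuming the SKYSTONE-season FTC SDK sample op mode (ConceptTensorFlowObjectDetection); the surrounding init code (`tfodMonitorViewId`, `vuforia`) is as in that sample:

      ```java
      // In the TFOD init section of the sample op mode (SKYSTONE-season FTC SDK):
      TFObjectDetector.Parameters tfodParameters =
              new TFObjectDetector.Parameters(tfodMonitorViewId);
      tfodParameters.minimumConfidence = 0.6;  // sample default is 0.8; lower = more detections
      tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
      ```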

      You can read details on reddit:
      https://www.reddit.com/r/FTC/comment..._makes_it_way/

      Also, here's a YouTube video that shows the detection with the threshold set to 0.6. You can see the bounding boxes are fairly consistently centered on the Stone or Skystone:

      https://youtu.be/zyX5suaBjqg

      I hope this helps!

      Tom



      • #4
        Tom Eng We have been trying this with a 0.3 confidence level, and it still doesn’t work. What do you think we should try?



        • #5
          Originally posted by P0W3R_ZURG3:
          Tom Eng We have been trying this with a 0.3 confidence level, and it still doesn’t work. What do you think we should try?
          @P0W3R_ZURG3 A confidence threshold of 0.3 is very low! Are you using a phone as your Robot Controller, or are you in a region that is allowed to use the Control Hub? If you are using a phone, can you see the boundary boxes being drawn around the elements?

          Some important things about the TensorFlow object detection (TFOD) software...

          1. The TFOD calculations are pretty computationally intensive. This means that the detection rate is relatively low on the typical Android devices (Motorola phones or Control Hub) used in the FIRST Tech Challenge. When you do the detection, is the robot stationary or moving rapidly? One thing to try is a static test: keep the robot still and try detecting the elements from a variety of stationary positions.

          2. For this season's game, whether you are using Vuforia or TFOD to find the Skystones, the camera needs to be relatively close to the elements for accurate detection (perhaps within 1.5 feet or so). How far is your camera from the target elements?

          Tom



          • #6
            Tom Eng We are using the Control Hub. The boundary boxes are being drawn, but it is very inconsistent, as it combines two stones into one (usually the Skystone and the stone to its right). Our robot is stationary and cannot see anything but the stones. Our robot is against the wall, maybe a little farther than 1.5 feet away. Any closer, and our webcam (Logitech C720) cannot see the stones we are targeting (a set of 3).



            • #7
              If your robot can reliably identify a Skystone, does it need to see the full set of 3?



              • #8
                @P0W3R_ZURG3 As Westside suggested, if it can reliably identify a Skystone, the detection might be sufficient even if the boundaries are wider, as long as the bounding box is symmetric about the element. Does your bounding box look like the following image? If so, you can calculate the horizontal midpoint of the Skystone and use that to localize with respect to the target.

                skystoneTFODWide.jpg
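                A minimal sketch of that midpoint calculation. In the FTC SDK the left/right pixel edges come from `Recognition.getLeft()` and `Recognition.getRight()`; the math below uses plain doubles, and the names and example values are illustrative:

                ```java
                // Sketch: estimate the horizontal midpoint of a TFOD bounding box
                // and its offset from the image center. Hypothetical helper names.
                public class SkystoneMidpoint {
                    // Horizontal center of the bounding box, in pixels.
                    static double boxMidpointX(double left, double right) {
                        return (left + right) / 2.0;
                    }

                    // Signed offset of the box center from the image center;
                    // negative means the element is left of center.
                    static double offsetFromImageCenter(double left, double right, double imageWidth) {
                        return boxMidpointX(left, right) - imageWidth / 2.0;
                    }

                    public static void main(String[] args) {
                        // Example: a wide box spanning 100..400 px on a 640 px wide image.
                        System.out.println(boxMidpointX(100, 400));             // 250.0
                        System.out.println(offsetFromImageCenter(100, 400, 640)); // -70.0
                    }
                }
                ```

                Even a too-wide box that is roughly centered on the Skystone gives a usable midpoint for steering toward the target.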





                • #9
                  The other option is to attempt to use Vuforia. The Skystone sticker is also a known Vuforia 2D image target.
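                  A hedged sketch of what that visibility check looks like, assuming the SKYSTONE-season Vuforia navigation sample; `allTrackables` and the "Stone Target" name come from that sample and may differ in your setup:

                  ```java
                  // Assuming trackables have been loaded and activated as in the
                  // SKYSTONE-season Vuforia sample; "Stone Target" is the Skystone sticker.
                  for (VuforiaTrackable trackable : allTrackables) {
                      if (((VuforiaTrackableDefaultListener) trackable.getListener()).isVisible()) {
                          telemetry.addData("Visible Target", trackable.getName());
                          break;
                      }
                  }
                  ```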



                  • #10
                    Tom,
                    Would you explain what the different colored rectangles mean? I have seen red, green, and purple. I understand the number in the top-left corner seems to be the confidence level.



                    • #11
                      Tom Eng Westside Thanks for your help, I think we know what to do now.



                      • #12
                        @P0W3R_ZURG3

                        I did some testing with a Logitech C270 camera and some side-by-side stones. I found that if the camera was about 18 inches from the stones, the camera would reliably detect the Stones and Skystones and could distinguish accurately between each element. I could slide the webcam along the row of elements and TensorFlow would accurately detect and track the elements.

                        If I moved the camera farther away (20 inches or more), I started to see less consistent bounding boxes (though they were still reasonable). If I moved too close, the elements occupied the entire field of view; while TensorFlow often still reliably detected the objects, it was hard to localize since the elements filled the screen.

                        I think a possible TensorFlow strategy would be to move the robot into position so it's about 18 inches from the blocks, then move horizontally/strafe along the elements, pausing momentarily to give TensorFlow time to make the recognitions.

                        Alternately, you can use Vuforia to find the Skystones using the sticker on the front. One advantage that TFOD has, however, is that it can detect and track the Stones. Vuforia can only track the Skystones and needs to see the sticker very clearly to be able to do so.

                        Good luck with your programming!

                        Tom

                        IMG_20191017_105424.jpg
                        MVIMG_20191017_105429.jpg



                        • #13
                          BTW - another trick is to calculate the ratio of the width of the bounding box to its height. If you assume that the height of the bounding box is reasonably consistent, you can check whether TensorFlow has drawn a very wide box or whether the ratio is consistent with the known size of the element.

                          If the width-to-height ratio is very high, you can assume you have an extra-wide boundary box and take action to correct it (perhaps move closer, or move horizontally to one side to center the element, etc.).
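                          A minimal sketch of that ratio check, using plain doubles (in the FTC SDK the box dimensions come from `Recognition.getWidth()` and `Recognition.getHeight()`); the aspect-ratio cutoff here is an assumption, not a tuned value:

                          ```java
                          public class BoxRatioCheck {
                              // Assumption: a single stone viewed face-on produces a box roughly
                              // twice as wide as tall, so a much wider box is likely two merged stones.
                              static final double MAX_ASPECT = 2.5;  // hypothetical cutoff

                              // True if the width-to-height ratio is plausible for one element.
                              static boolean looksLikeSingleStone(double width, double height) {
                                  return (width / height) <= MAX_ASPECT;
                              }

                              public static void main(String[] args) {
                                  System.out.println(looksLikeSingleStone(300, 160));  // plausible single-stone box
                                  System.out.println(looksLikeSingleStone(600, 160));  // extra-wide merged box
                              }
                          }
                          ```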



                          • #14
                            Tom Eng We were able to successfully get it working yesterday. Our object detection works perfectly. Thanks for all your help!



                            • #15
                              Vuforia works very well with the SkyStone image, as Tom suggests.
