
Skystone Tensor Flow Objects Merging Issue with SDK 5.2


  • Skystone Tensor Flow Objects Merging Issue with SDK 5.2

    As TensorFlow initializes and searches for the Skystone, it cannot differentiate the Skystone from the stones next to it. Instead, it merges the stones together, as shown in the images below. This can even prevent TensorFlow from detecting that there is a visible Skystone in the first place. When the Skystone is placed in the middle, it merges with the stone to its left. The merging can be avoided for the left and right positions by detecting the Skystone after driving to within 6 inches of it; however, the same cannot be said for when the Skystone is placed in the middle. TensorFlow is only accurate when the Skystone is placed in a configuration that does not occur in the actual game.

    Is anyone else experiencing this issue, and are there any recommendations for dealing with it?

    [Attached images: IMG_20191021_182112796_MP.jpg, IMG_20191021_181855963_MP.jpg]

  • #2
    We were having the same problem. You can use the returned positioning data to figure out the ratio of width to height. With this you can tell whether it is detecting 1, 2, or 3 stones (or 1-1/2, or 2-1/2...). However, you can't tell where exactly the Skystone is inside that area, so it really only tells you that you don't have an accurate reading yet.
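    Not from the post above, but a rough sketch of that ratio check, assuming the tfod and telemetry objects from the SKYSTONE sample op mode (Recognition is org.firstinspires.ftc.robotcore.external.tfod.Recognition); SINGLE_STONE_RATIO is a hypothetical constant you would tune for your own camera placement:

        // Approximate width/height ratio of a single stone as the camera
        // sees it; the 2.0 here is a guess, tune it on your own field setup.
        static final double SINGLE_STONE_RATIO = 2.0;

        List<Recognition> updated = tfod.getUpdatedRecognitions();
        if (updated != null) {
            for (Recognition r : updated) {
                double ratio = (double) r.getWidth() / r.getHeight();
                // A box twice as wide as one stone probably spans two stones.
                double stonesSpanned = ratio / SINGLE_STONE_RATIO;
                telemetry.addData(r.getLabel(),
                        "w/h %.2f, ~%.1f stone(s)", ratio, stonesSpanned);
            }
            telemetry.update();
        }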

    We finally gave up & are trying just Vuforia, as we couldn't get a reliable method with TensorFlow. This is better, but it has the same problem as last year - you have to be so close to the target that it's barely better than a color sensor.

    Comment


    • #3
      FTC6382
      I think the problem is that the default detection threshold is too high. TensorFlow is being too careful/selective in what it identifies as a Stone or Skystone.

      If you decrease the detection threshold in the sample op mode, your controller will more reliably detect the stones/Skystones adjacent to your target element.

      Check out the linked threads for more info; one is on Reddit, the other is on this very forum.
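      For reference, the threshold in the SDK 5.2 sample (ConceptTensorFlowObjectDetection) is the minimumConfidence parameter, which the sample sets to 0.8. A sketch of lowering it; the 0.6 is just a starting point to experiment with, not a recommended value:

          int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                  "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
          TFObjectDetector.Parameters tfodParameters =
                  new TFObjectDetector.Parameters(tfodMonitorViewId);
          tfodParameters.minimumConfidence = 0.6;  // sample default is 0.8
          tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
          tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_FIRST_ELEMENT, LABEL_SECOND_ELEMENT);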

      Tom

      Comment


      • #4
        We spent a lot of time working with the threshold as well, but just couldn't get it consistent enough that we dared to trust it. Sometimes it would detect properly, but then the numbers would bounce all over, making it difficult to figure out what it was really seeing. If others are getting it to work, then perhaps the lighting or the camera being used makes a difference as well (we have a Moto G4)?

        We also worried about what would happen in the tournament environment with other robots, refs, and spectators moving around in the background.

        Comment


        • #5
          FLARE - I think background images are a legit concern. Vuforia is nice because it specifically looks for the target image and even if it only sees a portion of the image it can still identify the target and then localize off of the target.

          With TensorFlow, I find that the detection is accurate if the camera is reasonably close (18" or less) to the target. Placing the camera at a slightly downward angle can help avoid seeing other objects in the background. Also, unfortunately, with the Android devices we have, the detection rate is fairly slow, so your robot has to move slowly, or move, then stop and attempt the TensorFlow object detection while the robot is stationary.
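          A minimal sketch of that stop-then-look pattern inside a LinearOpMode, assuming the tfod object from the sample op mode; the 2-second window and 50 ms polling interval are arbitrary choices:

              // After the drive command finishes, poll for a fresh frame while
              // the robot sits still. getUpdatedRecognitions() returns null
              // until a new frame has been processed.
              ElapsedTime timer = new ElapsedTime();
              List<Recognition> found = null;
              while (opModeIsActive() && timer.seconds() < 2.0 && found == null) {
                  found = tfod.getUpdatedRecognitions();
                  sleep(50);
              }
              // found may still be null (timeout) or empty (no detections).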

          If you have concerns about TensorFlow, then definitely check out Vuforia.

          Comment


          • #6
            Originally posted by Tom Eng View Post
            I find that the detection is accurate if the camera is reasonably close (18" or less) to the target.
            This is also concerning because of the limitations it puts on build designs. Mecanums are very cool & they carried us to Detroit quite effectively for the past 2 seasons; however, we were hoping to try a different design this year, just for additional learning opportunities on both the hardware & software sides. Having to be so close to the target (even with Vuforia, 24" is about the max) really pushes teams toward a drivetrain that can strafe, though.

            Still trying to decide the best balance between creativity & learning potential vs what will be the most effective during competition. Decisions, decisions...

            Comment


            • #7
              FLARE The CameraDevice setField methods offer control over camera settings; the available settings vary between phone models. With the Moto G4, one can use this to get optical zoom. This has allowed us to pick up the targets with Vuforia at a significantly greater distance than without zoom. It does affect the pose matrices, but it is still possible to distinguish the left vs. center vs. right target.
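              Not from the post above, but the call reportedly looks something like this; the "zoom" field name and the value are phone-dependent and undocumented, so treat both as things to experiment with. Call it after the VuforiaLocalizer has been created:

                  // Requires: import com.vuforia.CameraDevice;
                  // setField() returns false if the camera rejects the setting.
                  boolean zoomSet = CameraDevice.getInstance().setField("zoom", "25");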

              Comment


              • #8
                Originally posted by jkenney View Post
                FLARE The CameraDevice setField methods offer control over camera settings; the available settings vary between phone models. With the Moto G4, one can use this to get optical zoom. This has allowed us to pick up the targets with Vuforia at a significantly greater distance than without zoom. It does affect the pose matrices, but it is still possible to distinguish the left vs. center vs. right target.
                We haven't looked into any of the camera settings, but will do a little research on that end. Thanks!!

                Comment
