Judging in FIRST Tech Challenge

  • Judging in FIRST Tech Challenge

    Originally posted by [email protected] View Post

    This too needs time to flesh-out. In my opinion, an FCS system is essential.
    All good points, but I think FTC has been upfront about the need to make the events easier to host, which is not unreasonable. My experience of the tournaments is that performance on the field has very little bearing on who advances anyway. I have personally seen teams that ranked very low in the standings get advanced for no other reason than that they won an award arbitrarily bestowed by a panel of volunteer judges, or because they were picked as a friendly alliance partner for the semi-finals. Conversely, I have also seen teams ranked in the top four NOT get advanced for no other reason than that they didn't make it to the finals.

    Why sweat the details of the field matches if they ultimately have very little to do with who advances?

  • #2
    I think their hand was forced by the end-of-life of the NXT. You're making an investment in technology either way, whether it's EV3 or something else.

    I will not be sorry to see the FCS go. It's gotten better over the last 2 seasons, but I've waited through many long days while FTAs wrangled with FCS problems. And the limited availability of the Samantha module has been annoying at times too.

    To my mind if you want to "fix" anything about FTC, make the judging more transparent. I know judges have a tough job already, but make them justify their choice with something more than a pun-filled paragraph. I think it would eventually lead to more consistent judging and allow teams to learn from the judging outcomes. If we have to cut down on the overall number of awards that are judged, so much the better.

    Comment


    • #3
      I am taking a wait-n-see approach to the death of the FCS. I think that getting to something more modern than the Samantha module will be a HUGE improvement. I think many of the communication problems we see are not because of the routers or FCS - but because the Samantha is not very robust in noisy environments.

      We are still talking WiFi - even if it is WiFi Direct, it is built on top of WiFi. That leaves us open to interference, intentional or not. When you have a lot of systems all transmitting on the same channel, I have a feeling it will be "interesting". I suspect the first few times we have a lot of teams in the same venue we will have to work out ways to make sure that everything plays nice together.

      Now - when you talk about making judging more transparent - I have to agree 100% especially with regard to giving teams feedback. This is supposed to be an educational program and FIRST people say judges can't tell the teams how they could improve. That is the opposite of what we should be doing. Teams need the feedback about what they did well and what they didn't do well so that they can learn.

      Comment


      • #4
        I wonder about this as well. My team only lost one match all year and made it to the finals in every tournament we entered, but we never received an award and did not make it to Super Regionals. Like you stated, teams that had a mediocre (or poor) robot advanced to Super Regionals ahead of us. I would appreciate some judges' feedback so that my team can know where to improve. My kids did all the work with little (almost no) adult help. They are somewhat discouraged and don't know what areas need improvement. I understand the need to do community outreach (which we did), but it seems that certain teams volunteer simply because they know it helps them advance, which is really not the true spirit of volunteerism.

        Comment


        • #5
          Originally posted by DanOelke View Post
          Now - when you talk about making judging more transparent - I have to agree 100% especially with regard to giving teams feedback. This is supposed to be an educational program and FIRST people say judges can't tell the teams how they could improve. That is the opposite of what we should be doing. Teams need the feedback about what they did well and what they didn't do well so that they can learn.
          At the very least they could go back to allowing partners to provide feedback if they wish, instead of banning all feedback. FTC <> FRC... it's about learning, after all.

          Comment


          • #6
            Judging in FIRST Tech Challenge

            Originally posted by Robert Van Hoose View Post

            To my mind if you want to "fix" anything about FTC, make the judging more transparent. I know judges have a tough job already, but make them justify their choice with something more than a pun-filled paragraph. I think it would eventually lead to more consistent judging and allow teams to learn from the judging outcomes. If we have to cut down on the overall number of awards that are judged, so much the better.
            Hi Robert,

            I'm going to start a thread in the community section of the forum and move the Judging posts there - I'm afraid they'll be overlooked in a forum devoted to technology. I'm very interested in hearing what you'd like the Judging scripts to include.

            Thanks for the feedback!

            JoAnn

            Comment


            • #7
              Hi Korimako and DanOelke,

              FTC made the decision that teaching teams to evaluate themselves provides them with tools and skills that will be valuable to them for the rest of their lives. A self evaluation ultimately becomes more meaningful than a summary sheet that is completed by a panel of Judges, based on how well the team interviewed on that particular day.

              If you believe we can provide resources that will make it easier for a team to do a self evaluation, please let me know what you think might help and we'll try to develop something.

              Thanks for your feedback!

              JoAnn

              Comment


              • #8
                Originally posted by FTC Cause and Effect View Post
                Hi Korimako and DanOelke,

                Thanks for your feedback!

                JoAnn
                Thanks for starting this thread,

                I just wanted to add my vote for more transparency in the judging. Our experience is limited - only two tournaments - but the team received zero feedback from either FTC or the judges. The entire process was a complete mystery; we still have no clue what criteria the judges were using, and as such the results were impossible to evaluate. I did not personally see any efforts to teach the teams "self-evaluation" or even to provide the criteria against which a team would presumably evaluate itself. Even Olympic athletes get to see what number the judges hold up at the end of their performance.

                In addition, the other important point that others have asked about is the advancement criteria. That aspect of the tournaments was also extremely confusing and frustrating for our team. Taking the Olympics analogy a step further, it is as though a team has prepared for years to win points on a competition field (or shave fractions of a second off of a race result) only to be told that their score doesn't really matter.

                Is FTC a performance event, such as gymnastics or high diving, or is it a team sport? The mish-mash of advancement criteria looks like FTC is trying to do both, and ultimately not doing either one very well.

                Perhaps the competition should be split into two different venues (not unlike the Olympics!), where teams compete in one venue for judged awards and in the other for points on the match field. An autonomous competition, one robot at a time, would lend itself well to judged awards.

                Anyway, the system as it stands seems very unfair, especially the huge disconnect between the rankings from the qualifying matches and the advancement criteria, and also the complete lack of transparency and feedback from the judging process. It also seems unfair when less qualified teams advance for no other reason than that they were picked as alliance partners for the semi-finals.

                If FTC is a team sport then it should be treated as such and top scoring teams advance. If FTC is a judged competition then it should be treated as such and teams given a score based on their performance as judged against well established and transparent criteria. If FTC is both of those things then maybe two separate venues are called for, as opposed to the current confusing and mysterious mish-mash.

                Just my thoughts, thanks for listening!
                Last edited by Jerry McManus; 04-02-2015, 03:40 PM.

                Comment


                • #9
                  As an experienced FIRST alum, I can see where Jerry is coming from. The whole judging session is a very mysterious process. With so much to say about the robot, teams can end up rambling or going off topic, and there is no indication of what form a good judging session should take. That said, I don't think FTC compares very well to the Olympics. FTC compares better to a triathlon, or perhaps the music awards, where artists can win prizes for best music, or they can win prizes for best breakout, or for being the crowd favorite.

                  On the combination of performance and "people skills," I like the way that FIRST works. FIRST wants a team to build a good robot, and to interact well with others. If you want to advance, you can do one of those really well, ending up as the winning alliance captain or the Inspire/Connect winner. Our team came in 24th in a 24-robot competition, and advanced off the PTC Design Award. However, most teams will not be the best at any one thing. Instead, your average high-quality team will finish qualifying matches in 6th or 7th place, and make a lot of friends along the way. They will go pit to pit and pitch their robot to the top teams, and they will end up as a first pick. If their alliance doesn't end up winning, they're likely to win a judged award. FTC is not a contest of robot ability or of social prowess, but rather of the ability to build a good robot AND the ability to build the social environment to back it up.

                  In any case, splitting the events will likely never happen. Besides the fact that they already made one major change recently, the idea that a team can be full of sunshine and daisies at one competition and totally un-GP at the next, and still win a judged award, does and should rub many people the wrong way.

                  Cause and Effect, I see why FIRST wants self-evaluations. Prompting teams to evaluate themselves is certainly a habit worth encouraging in the rest of life. However, one of the big things I would like to see from judging, that would require little to no work for most volunteers, is a set of videos of 5 or 6 "successful" judging sessions along with a few "unsuccessful" ones. I'm sure some of the teams near FIRST HQ or Worlds would love to act out a successful judging session. A member of the GDC could write out a set of scripts for the judges and the students, and with only a day or so of work, thousands of FTC teams could see what a judging session is, and how to act during it.

                  The argument against these examples is that the GDC wouldn't want to make students feel as though they could only present in one of these 5 or 6 manners. I would say that that is not a problem. Experienced teams will have no trouble coming up with and using their own judging outline. (my team uses front to back design explanations, followed by outreach) New teams, on the other hand, would greatly benefit from following one of the examples. In short, while new teams might feel that way, that wouldn't be a bad thing. New teams need guidance, and as they become experienced teams, they will come into their own.

                  Comment


                  • #10
                    Originally posted by FTC Cause and Effect View Post
                    Hi Korimako and DanOelke,

                    FTC made the decision that teaching teams to evaluate themselves provides them with tools and skills that will be valuable to them for the rest of their lives. A self evaluation ultimately becomes more meaningful than a summary sheet that is completed by a panel of Judges, based on how well the team interviewed on that particular day.

                    If you believe we can provide resources that will make it easier for a team to do a self evaluation, please let me know what you think might help and we'll try to develop something.

                    Thanks for your feedback!

                    JoAnn
                    Thank you for the response. It's good to hear the reasoning rather than have to speculate. It seems like the next step would be to evaluate how well the self evaluation works. Are there any plans to collect non-anecdotal input on this?

                    Comment


                    • #11
                      So many thoughts.

                      I agree that self evaluation is important. However, feedback is important too. You are not going to advance very far in your career if you sit in your own bubble. Twice-yearly reviews are standard at companies. And if you truly want to do a good job, you will solicit feedback more often than that. The team mentor can provide feedback, but nothing gets the students' attention more than the judge backing that up. You are not teaching the students to be productive engineers if you teach them to neither seek nor expect feedback.

                      As far as advancement goes:
                      I think there might be a bit too much weight given to judged awards. Perhaps a 60/40 split between on-field performance and judged awards would be fairer. Since a judged winner is always advanced before a performance winner in the current system, the split currently leans the other direction: 60% subjectively judged awards and 40% more objective field performance awards.
                      However, I do like the order in which field performance awards are advanced. The rankings after preliminaries have enough weight in the current system. Good scouting leads to the best evaluation of robot quality. Yes, some captains pick their friends over the best robot (and those teams often lose in the semi-finals). But more often than not, the captains pick the best robots, and those robots might be low in the rankings. Luck plays a big role in preliminary rankings, particularly with a low number of rounds. (Bad luck: you are matched against two pretty good bots with a partner who cannot move. Good luck: your opponents get an inadvertent major penalty in an otherwise close match.)

                      Comment


                      • #12
                        Good discussion, and all good ideas. It sounds like people like the idea of FTC being a judged event; I particularly like the analogy of a triathlon.

                        I think a scoring system would go a long way towards solving the problem of transparency and feedback. Teams would receive scores for their performance in the interview and possibly also for their performance on the field. It could be one overall score or broken down into design, outreach, etc.

                        That might also help clear up the advancement issue, especially if autonomous had its own stage. Points earned in the interview, points earned in autonomous program, and points earned in match play would be combined into an overall score. The idea being that teams with the highest overall scores would advance, and awards would be given based on scores in specific areas or, in the case of the inspire award, given for highest overall score. The tournaments would stay fundamentally the same, judging in the morning and autonomous / match play in the afternoon, but the match play rankings would be eliminated in favor of this overall scoring system.

                        Teams that earn high scores across the board will advance and will probably earn awards based on scores in specific areas (such as the example of the music awards).
                        Teams that only do well in one area might not advance but will probably earn an award.
                        Teams that earn "good enough" scores across the board might not earn any awards, but will probably advance.
                        Each team can then use their scores to self-evaluate.

                        Clean, simple, and totally transparent.
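
                        Just to make the arithmetic concrete, here is a rough sketch in Python of how such a combined score might be computed. The categories, weights, and numbers are entirely hypothetical - nothing FTC has defined - and the snippet is purely illustrative:

                          # Hypothetical combined-score calculation (illustration only).
                          # The categories and weights are made up, not official FTC values.
                          WEIGHTS = {"interview": 0.4, "autonomous": 0.3, "match_play": 0.3}

                          def overall_score(scores):
                              """Combine per-category scores (each 0-100) into one overall score."""
                              return sum(WEIGHTS[cat] * scores.get(cat, 0) for cat in WEIGHTS)

                          teams = {
                              "Team A": {"interview": 85, "autonomous": 70, "match_play": 90},
                              "Team B": {"interview": 95, "autonomous": 40, "match_play": 60},
                          }

                          # Teams with the highest overall score would advance; the per-category
                          # scores double as the feedback each team can use to self-evaluate.
                          for name in sorted(teams, key=lambda t: overall_score(teams[t]), reverse=True):
                              print(f"{name}: {overall_score(teams[name]):.1f}")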

                        Comment


                        • #13
                          Originally posted by Jerry McManus View Post
                          Good discussion, and all good ideas. It sounds like people like the idea of FTC being a judged event; I particularly like the analogy of a triathlon.

                          I think a scoring system would go a long way towards solving the problem of transparency and feedback. Teams would receive scores for their performance in the interview and possibly also for their performance on the field. It could be one overall score or broken down into design, outreach, etc.

                          That might also help clear up the advancement issue, especially if autonomous had its own stage. Points earned in the interview, points earned in autonomous program, and points earned in match play would be combined into an overall score. The idea being that teams with the highest overall scores would advance, and awards would be given based on scores in specific areas or, in the case of the inspire award, given for highest overall score. The tournaments would stay fundamentally the same, judging in the morning and autonomous / match play in the afternoon, but the match play rankings would be eliminated in favor of this overall scoring system.

                          Teams that earn high scores across the board will advance and will probably earn awards based on scores in specific areas (such as the example of the music awards).
                          Teams that only do well in one area might not advance but will probably earn an award.
                          Teams that earn "good enough" scores across the board might not earn any awards, but will probably advance.
                          Each team can then use their scores to self-evaluate.

                          Clean, simple, and totally transparent.
                          I think you're getting at the root of the matter. A scorecard/points system for judging would be a step in the right direction. Or perhaps a published rubric that guides the judging process. I realize there are downsides to published guidelines; you risk pushing teams in a certain direction and limiting innovation. As a prior comment noted, advancement is weighted more on the judged awards than on the competition results. If the 60/40 judging/playing split continues, then judging needs to be opened up. Alternatively, de-emphasize the importance of judging on advancement. We've all seen the controversies in sports like figure-skating where judging results don't align with spectator perceptions of performance. It opens the door to grumbling and discontent, rumors of bias, etc. As FTC participation grows these sorts of conflicts will become more likely, I think.

                          Comment


                          • #14
                            Originally posted by 3493FTC View Post
                            As an experienced FIRST alum, I can see where Jerry is coming from. The whole judging session is a very mysterious process. With so much to say about the robot, teams can end up rambling or going off topic, and there is no indication of what form a good judging session should take. That said, I don't think FTC compares very well to the Olympics. FTC compares better to a triathlon, or perhaps the music awards, where artists can win prizes for best music, or they can win prizes for best breakout, or for being the crowd favorite.

                            On the combination of performance and "people skills," I like the way that FIRST works. FIRST wants a team to build a good robot, and to interact well with others. If you want to advance, you can do one of those really well, ending up as the winning alliance captain or the Inspire/Connect winner. Our team came in 24th in a 24-robot competition, and advanced off the PTC Design Award. However, most teams will not be the best at any one thing. Instead, your average high-quality team will finish qualifying matches in 6th or 7th place, and make a lot of friends along the way. They will go pit to pit and pitch their robot to the top teams, and they will end up as a first pick. If their alliance doesn't end up winning, they're likely to win a judged award. FTC is not a contest of robot ability or of social prowess, but rather of the ability to build a good robot AND the ability to build the social environment to back it up.

                            In any case, splitting the events will likely never happen. Besides the fact that they already made one major change recently, the idea that a team can be full of sunshine and daisies at one competition and totally un-GP at the next, and still win a judged award, does and should rub many people the wrong way.

                            Cause and Effect, I see why FIRST wants self-evaluations. Prompting teams to evaluate themselves is certainly a habit worth encouraging in the rest of life. However, one of the big things I would like to see from judging, that would require little to no work for most volunteers, is a set of videos of 5 or 6 "successful" judging sessions along with a few "unsuccessful" ones. I'm sure some of the teams near FIRST HQ or Worlds would love to act out a successful judging session. A member of the GDC could write out a set of scripts for the judges and the students, and with only a day or so of work, thousands of FTC teams could see what a judging session is, and how to act during it.

                            The argument against these examples is that the GDC wouldn't want to make students feel as though they could only present in one of these 5 or 6 manners. I would say that that is not a problem. Experienced teams will have no trouble coming up with and using their own judging outline. (my team uses front to back design explanations, followed by outreach) New teams, on the other hand, would greatly benefit from following one of the examples. In short, while new teams might feel that way, that wouldn't be a bad thing. New teams need guidance, and as they become experienced teams, they will come into their own.
                            Thanks 3493FTC - I think providing video examples of different ways to present to the judges (and a few 'how not to present' tips) is a great idea and is completely do-able.

                            JoAnn

                            Comment


                            • #15
                              A self evaluation ultimately becomes more meaningful than a summary sheet that is completed by a panel of Judges, based on how well the team interviewed on that particular day
                              I strongly disagree with the above quote. The self evaluation is less useful than the evaluation by the judges. A self evaluation in a vacuum, by its very nature, leads to navel-gazing. Without external feedback, self evaluation does not help. I agree that the self evaluation is a good tool and teams should use it, but it is only one of the tools they should use for improvement.

                              The absence of feedback is the biggest complaint I have heard about the FIRST judging process.

                              Let me throw out a robotics analogy. Without sensors a robot can move about the field, but it is often blindly bumping into things and will only sometimes (rarely) reach its objective. With sensors providing feedback, that same robot can move more directly to its objective. Better yet, combine that feedback with PID control and it can achieve its objective even more quickly. The key is to have a feedback mechanism and a good process to use that feedback effectively.
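
                              To make the analogy concrete, here is a minimal, generic sketch of such a feedback loop in Python - a toy PID controller driving a position toward a target. It is illustrative only, not FTC code or anything tied to a particular control system:

                                # Toy feedback loop with PID control (illustration only).
                                # Without the sensor reading there is nothing to correct against;
                                # with it, the controller steadily drives the error toward zero.
                                class PID:
                                    def __init__(self, kp, ki, kd):
                                        self.kp, self.ki, self.kd = kp, ki, kd
                                        self.integral = 0.0
                                        self.prev_error = 0.0

                                    def update(self, error, dt):
                                        self.integral += error * dt
                                        derivative = (error - self.prev_error) / dt
                                        self.prev_error = error
                                        return (self.kp * error
                                                + self.ki * self.integral
                                                + self.kd * derivative)

                                pid = PID(kp=0.8, ki=0.05, kd=0.02)
                                position, target, dt = 0.0, 100.0, 0.1
                                for _ in range(500):
                                    error = target - position      # the "sensor": how far off are we?
                                    position += pid.update(error, dt) * dt
                                print(round(position, 1))          # ends up near the 100 target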

                              I'm an engineer by training & profession - as such I've always used feedback, or dug it out where I could. Yes, some of the HR groups at companies I've worked with had self evaluation forms - but they have always been paired with a manager's feedback. Whether as an employee working with my manager, or as a manager myself, when we got to the formal review the self evaluations would be reviewed by both sides, so that we could see where they differed from the manager's assessment and then discuss why and how to improve. Now, while I believe that self assessments are a useful tool if used correctly - to be honest, most engineers I know ignore them, because often what is written is ignored by the manager and so the self assessment becomes a joke (yes, engineers can be a cynical lot at times). What really matters to most people is the outside review - i.e. the feedback from their manager.

                              As a teacher, if I didn't give feedback to the students they would (rightfully) complain. The more comments I wrote on their homework or exams, the better they liked it AND I could see in later exams that they understood the material better. On the flip side, the students filled out evaluations of me twice each semester, and I was able to use the comments/feedback from those to improve my teaching. That feedback over the years made me a much better instructor.

                              Through my experience with 4-H I have mentored a lot of youth. Every year at the county and state fair, youth or parents will have questions about why a particular project was judged to be above or below another project. The explanation I give is that part of the learning experience is learning that some judges have particular preferences. Even when we have a well documented "Standard of Perfection" (common for many livestock projects) there is still some judgement as to whether the strengths of one project overcome its weaknesses better than the balance of strengths and weaknesses in another. That is why we call them "judges" - because they have the responsibility to make those judgement calls. It isn't that hard to educate the youth (& adults) that while we strive to keep the process as fair & balanced as possible, it is also normal for one judge to come up with different rankings than another judge would.

                              Now, also as part of this judging process in 4-H, we tend to prefer what we call "face to face" judging. That is, the youth sit down with the judge and discuss their project. The judge can then see not only the end product but also learn about the process the youth went through to create the project. Best of all, the judge will then tell the youth what they should work on for next time. It does take some skill to make it constructive advice and not just criticism, but it isn't that hard a skill for the judges to learn (and they are trained in it). Even in those cases where the judging is not "face to face", the best judges will line up the projects and explain to the group why certain ones are better than others. This shows everyone in the group how they can improve for next time.

                              I've worked with youth development for a lot of years. I've taught at the graduate level for 12 years. I've worked in industry for 20+ years. In none of these environments was self evaluation considered an acceptable substitute for feedback. As such I reject the premise that the self evaluation form is more meaningful than getting the judges' feedback.

                              Comment
