SR1 Stacking After Week 8


  • #16
    Originally posted by boatcapt View Post

    Three loss team in...BOLD prediction!
    I'm pretty confident that if the regional rankings went far enough, Findlay would be the highest GMAC team. Two of their losses are to Ferris State and Lindenwood, and their W%+SOS is better than Tiffin's and close enough to ODU's for the head-to-head to be decisive.

    Comment


    • #17
      Originally posted by Inkblot View Post

      This set of SR1 rankings differs from W%+SOS in only two ways: swapping Kutztown and Shepherd, and putting Charleston in at 10th.
      So you're saying the other selection criteria were not considered?

      I wonder what IUP's and Kutz's SOS are?

      Comment


      • #18
        Originally posted by boatcapt View Post

        So you're saying the other selection criteria were not considered?

        I wonder what IUP's and Kutz's SOS are?
        IUP's was previously stated, but here: https://www.ncaa.com/rankings/footba...ional-rankings
        Scroll to the bottom and you'll find the current worksheets for all of the regions.

        Comment


        • #19
          Originally posted by Horror Child View Post

          IUP's was previously stated, but here: https://www.ncaa.com/rankings/footba...ional-rankings
          Scroll to the bottom and you'll find the current worksheets for all of the regions.
          There is a .106 delta between UC/NDC and .016 for Kutz/Shep, so clearly the imaginary line where head-to-head results come into play is somewhere less than .106 and at or above .016.
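
          To make that threshold idea concrete, here is a minimal Python sketch. The cutoff value, the sums, and the head-to-head winner below are all made-up numbers; the worksheets only pin the real line somewhere between .016 and .106.

          Code:
          # A minimal sketch of the "imaginary line" idea: rank a pair by W%+SOS,
          # but let a head-to-head result flip them when the delta is small enough.
          # The threshold (0.05) and the head-to-head winner are hypothetical.

          H2H_THRESHOLD = 0.05  # hypothetical cutoff

          sums = {"Kutztown": 1.400, "Shepherd": 1.384}  # made-up sums, delta = .016

          def order_pair(ahead, behind, h2h_winner):
              """Return the pair in ranked order, applying head-to-head if close."""
              delta = sums[ahead] - sums[behind]
              if delta < H2H_THRESHOLD and h2h_winner == behind:
                  return behind, ahead  # head-to-head overrides the raw sum
              return ahead, behind

          # With a .016 delta, the (assumed) head-to-head result decides the pair:
          print(order_pair("Kutztown", "Shepherd", h2h_winner="Shepherd"))
          # -> ('Shepherd', 'Kutztown')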

          Couple of other observations:

          1. People always seem to want to come up with some form of "formula" that the committee uses to evaluate all teams equally and fairly. This year they seem to have settled on "The rankings are based on W/L% added to SOS, with head-to-head results added in if necessary." But for this to be true, it has to produce the appropriate results for all teams. When you get to the bottom of the Top 10, it is clear that this is NOT the "formula" used to rank teams 10-13. So you are left with the undeniable fact that the committee either used a different formula to rank 1-9 than to rank 10-13, OR the formula is not just W/L%+SOS.

          2. While not included in the rankings, Frostburg would be in the 5 slot.

          3. Not listed in the SR1 sheet is UNCP. They are shown on the SR2 sheet.
          Last edited by boatcapt; 10-27-2021, 08:09 AM.

          Comment


          • #20
            Originally posted by boatcapt View Post

            There is a .106 delta between UC/NDC and .016 for Kutz/Shep, so clearly the imaginary line where head-to-head results come into play is somewhere less than .106 and at or above .016.

            Couple of other observations:

            1. People always seem to want to come up with some form of "formula" that the committee uses to evaluate all teams equally and fairly. This year they seem to have settled on "The rankings are based on W/L% added to SOS, with head-to-head results added in if necessary." But for this to be true, it has to produce the appropriate results for all teams. When you get to the bottom of the Top 10, it is clear that this is NOT the "formula" used to rank teams 10-13. So you are left with the undeniable fact that the committee either used a different formula to rank 1-9 than to rank 10-13, OR the formula is not just W/L%+SOS.

            2. While not included in the rankings, Frostburg would be in the 5 slot.

            3. Not listed in the SR1 sheet is UNCP. They are shown on the SR2 sheet.
            Frostburg would likely be 6th thanks to NDC's head-to-head win. I agree that Charleston at 10th is inconsistent, and my best guess is that they're the only one with a win over anyone in the top 9.

            Comment


            • #21
              Originally posted by Inkblot View Post

              Frostburg would likely be 6th thanks to NDC's head-to-head win. I agree that Charleston at 10th is inconsistent, and my best guess is that they're the only one with a win over anyone in the top 9.
              But when people say that the formula is W/L%+SOS, they are wrong. When you have to throw in other criteria to get to the actual listed top 10, that clearly shows that there is no standardized formula that is applied to all teams equally.

              Comment


              • #22
                Originally posted by boatcapt View Post

                But when people say that the formula is W/L%+SOS, they are wrong. When you have to throw in other criteria to get to the actual listed top 10, that clearly shows that there is no standardized formula that is applied to all teams equally.
                Going on 15 years now. You've made your point, Boat; we all get that there is no exact formula. You relentlessly remind us every year. There is a human element; it is what it is.

                Comment


                • #23
                  Originally posted by Ram040506 View Post

                  Going on 15 years now. You've made your point, Boat; we all get that there is no exact formula. You relentlessly remind us every year. There is a human element; it is what it is.
                  I didn't get up on this soap box until at least 2009, so it would be 12 years and not 15! I remember when I started, people were saying the NCAA measured each team and each conference equally and without bias or favoritism. If a particular conference got four teams into the playoffs and your conference got one, that's just what the numbers said. Every year someone would come onto the board with spreadsheets and formulas that "proved" beyond a shadow of a doubt that there was a secret NCAA regional ranking/playoff selection formula and that they had broken the code...until the next week/year when their "formula" produced wildly different results. I consider it a major victory that almost everyone agrees that there is no formula against which every team is measured impartially. I remember being called all sorts of foul names when I had the audacity to even imply that the NCAA could possibly be anything but completely fair and impartial in their playoff selections!

                  As for getting back on the soap box every year, I think it bears repeating when people start saying things like the NCAA formula is clearly A+B=C...and people believe that to be true.

                  Bottom line, no formula...just a list of criteria that the selection committee may use, but with no comparative value given to any criterion. Theoretically, a committee member could say, yes, I considered W/L%...and then dismissed it completely, and he/she would technically be in compliance with the rules.

                  Comment


                  • #24
                    Originally posted by boatcapt View Post

                    I didn't get up on this soap box until at least 2009, so it would be 12 years and not 15! I remember when I started, people were saying the NCAA measured each team and each conference equally and without bias or favoritism. If a particular conference got four teams into the playoffs and your conference got one, that's just what the numbers said. Every year someone would come onto the board with spreadsheets and formulas that "proved" beyond a shadow of a doubt that there was a secret NCAA regional ranking/playoff selection formula and that they had broken the code...until the next week/year when their "formula" produced wildly different results. I consider it a major victory that almost everyone agrees that there is no formula against which every team is measured impartially. I remember being called all sorts of foul names when I had the audacity to even imply that the NCAA could possibly be anything but completely fair and impartial in their playoff selections!

                    As for getting back on the soap box every year, I think it bears repeating when people start saying things like the NCAA formula is clearly A+B=C...and people believe that to be true.

                    Bottom line, no formula...just a list of criteria that the selection committee may use, but with no comparative value given to any criterion. Theoretically, a committee member could say, yes, I considered W/L%...and then dismissed it completely, and he/she would technically be in compliance with the rules.
                    In a championship manual from a previous year, it explicitly stated:

                    An easy way to give "equal weighting" to factors is to sum them (or average them, but sum is fine for these purposes) and order the results. The line is no longer in the championship manual, but doing so and then considering head-to-head results sure seems to make the list reasonably easy to predict, or "spot on" as some might say.

                    What if you learned that the regional advisory and the national committees were not given school/team names next to the criteria, just "Team A", "Team B", etc., and were then told to order the teams? Only after the ordering was complete were the school/team names revealed, and then head-to-head situations were resolved for the final rankings of the week.
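
                    If that is roughly the procedure, it is easy to sketch. Here is a hedged Python version: sum the equally weighted factors, order the anonymized teams, then reveal names and resolve head-to-head results. The numbers and the resolution rule (bubbling an adjacent head-to-head winner upward) are assumptions, not anything published by the NCAA.

                    Code:
                    # Sketch of the described procedure: sum the equally weighted
                    # factors, order the (anonymized) teams, then resolve head-to-head
                    # results. Values and the adjacent-swap rule are made up; repeat
                    # matchups and circular results are not handled.

                    def rank_blind(teams, h2h_wins):
                        """teams: {name: (win_pct, sos)}; h2h_wins: {(winner, loser), ...}"""
                        # Step 1: order by the equally weighted sum.
                        order = sorted(teams, key=lambda t: sum(teams[t]), reverse=True)
                        # Step 2: names revealed; move a head-to-head winner above a
                        # loser ranked immediately ahead of it.
                        changed = True
                        while changed:
                            changed = False
                            for i in range(len(order) - 1):
                                if (order[i + 1], order[i]) in h2h_wins:
                                    order[i], order[i + 1] = order[i + 1], order[i]
                                    changed = True
                        return order

                    teams = {"Team A": (0.900, 0.540),
                             "Team B": (0.889, 0.560),
                             "Team C": (0.800, 0.610)}
                    # Team B edges Team A on the sum, but Team A won the meeting:
                    print(rank_blind(teams, h2h_wins={("Team A", "Team B")}))
                    # -> ['Team A', 'Team B', 'Team C']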

                    Comment


                    • #25
                      Originally posted by Horror Child View Post

                      In a championship manual from a previous year, it explicitly stated:



                      An easy way to give "equal weighting" to factors is to sum them (or average them, but sum is fine for these purposes) and order the results. The line is no longer in the championship manual, but doing so and then considering head-to-head results sure seems to make the list reasonably easy to predict, or "spot on" as some might say.

                      What if you learned that the regional advisory and the national committees were not given school/team names next to the criteria, just "Team A", "Team B", etc., and were then told to order the teams? Only after the ordering was complete were the school/team names revealed, and then head-to-head situations were resolved for the final rankings of the week.
                      I note that the passage is explicitly left OUT of the 2021-22 pre-championship manual. Soooooooo...

                      I also note that even if your contention is that the selection committee is required to give "equal weight" to W/L% and SOS, that doesn't address how much weight they give to the OTHER Required Selection Criteria. You know, In-Region Winning % and Head-to-Head Results and Results vs Common Opponents, to say NOTHING about the selection criteria that the Selection Committee MAY consider, like DII Results vs Teams With a Winning Record, Performance Indicator, and Results vs Ranked Opponents.

                      And then there is that innocent little sentence at the end of the criteria: "Additionally, input is provided by the regional advisory committee for consideration by the football committee." That one little sentence basically allows the Regional Committee to weigh in and provide whatever input they want to...written or not.
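
                      For what it's worth, the point about unweighted criteria is easy to demonstrate: two rankers can honor the same criteria list and still order teams differently. The values and weights below are entirely invented.

                      Code:
                      # Two hypothetical committee members, same criteria list, different
                      # (unpublished) weights -> different rankings. All values invented.

                      # columns: W%, SOS, in-region W%
                      teams = {"Team A": (0.900, 0.540, 1.000),
                               "Team B": (0.889, 0.560, 0.857)}

                      def rank(weights):
                          score = lambda t: sum(w * v for w, v in zip(weights, teams[t]))
                          return sorted(teams, key=score, reverse=True)

                      print(rank((1, 1, 0)))  # counts only W% and SOS -> ['Team B', 'Team A']
                      print(rank((1, 0, 1)))  # dismisses SOS entirely -> ['Team A', 'Team B']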

                      Comment
