Whoever gets the highest automated score, at least from me, is not necessarily the person I would deem most worthy of an award. Taken as an N=33, these applications may expose more about America's educational process than they do about individual candidates.

First, I see a fair amount of inflation, not only of grades, which I assume each student earned, but also of course titles. If AP means college level and every kid takes three AP courses a year, then either the kids are being pushed too hard or college isn't hard enough. If AP courses are offered before senior year, then the known AP exam scores need to be part of the scholarship application too. The arbiter of grade and course-description inflation has generally been exam scores. There are legitimate reasons for a mismatch, and it is hard to tell whether a mismatch reflects on the student or the school. But if there is a serious interest in holding schools accountable for student performance, and grades typically far exceed test scores, the problem lies with the school. I saw a lot of that in each of the three years I reviewed the data given to me, made harder by the increasingly optional nature of including standardized test results in the application. SAT scores may be less valuable than Achievement Test or SAT II results, which can be matched more easily with specific grades. Again, this may reflect more on the school, and how the local teachers assign their grades, than on the students who receive those grades. A few kids from my medical school class conducted such an experiment: since we had a note-taking service, a few opted not to attend lectures but to study texts and the lecture notes instead. They underperformed on class tests and overperformed on Board exams. But the scholarship applications, with their divergences between grades and standardized scores, raise serious questions about either what is taught or how class performance is assessed.
I also got the sense that some of these kids were aiming for an intermediate future, below their capacity. I didn't encounter any applying to the Ivies or comparably competitive colleges, though judging by their performance they could have. The geographic distribution of their choices also seemed more restricted than I might have expected from their level of talent. It makes me wonder about the guidance they receive.
And finally, some people write more effective letters of recommendation than others. I like to read illustrative vignettes that connect talent to performance. I wonder whether schools coach their faculty on how to convey performance effectively, or whether DCF gives the recommendation writers, many of whom are not faculty, a heads-up about what types of comments catch a reader's attention. If not, this would be a useful addition to the process.
While interviews might be a tempting addition, for kids in their late teens they could be intimidating and perhaps even misleading, as poise often develops as part of the college experience. The volume of applications and the limited experience of the interviewers might also make this addition impractical, or even detract from the assessment of the students. Best to keep it all in writing on the screens for now.