On Authenticity and Assessment

Wow. Over two weeks since my last post? My, how time flies when you’re juggling the start of a new semester of graduate school while simultaneously getting oriented to your new practicum site! While I am still feeling a little overwhelmed and am working out my schedule, I’m happy to report that my practicum placement for the year is off to a great start. I’m in an elementary school, grades 5-6, with a supervisor who has years of experience and a lot of wisdom to impart. I even have my own desk in the office I share with the school psychology intern, who thoughtfully brings mini Ghirardelli dark chocolate squares to help us get through the afternoon doldrums. Plus, unlike much of the rest of the building, our office has working air conditioning, and we’re next to the music room, so we get to hear kids singing and dancing throughout the day. Life is good.

As part of my orientation for the beginning of the year, I’ve been reviewing student files and perusing past evaluations to get an idea of best practices. To enhance this process, my supervisor has shared with me a handful of reports she feels are good examples of what not to do when evaluating a child in an academic setting. These are evaluations that came from outside the school, usually from private practitioners whom parents hired to assess their children. As I may have mentioned, before shifting to school psychology I worked for years doing forensic evaluations of juvenile offenders, and then later intelligence and achievement testing of elementary-aged children to determine eligibility for gifted schools. So I have both written and read a fair number of psychological reports in my career.

One thing I can say definitively is that every psychologist I have ever met has their own standards for how to write a report, and every single one can point out something they would do differently in someone else’s report. This seems to be the nature of the field, and it’s not a bad thing: looking critically at others’ work can help you cultivate your own values and techniques. For example, at times I have agreed with a clinician’s critique of another’s work, while at other times I’ve thought to myself “That’s actually how I would have done that.” Both situations are valuable because they put you in a position of consciously deciding what your personal assessment style will be. As one of my professors exhorts us weekly, “Be authentic!” Knowing what you want to do and why you want to do it – and then actually doing it – is a great way to be authentic.

In an assessment situation, authenticity comes from using the appropriate assessment tools and then being able to explain their results. Put another way, what is the utility of each assessment tool you’ve selected, and does it contribute something meaningful? I’ve been thinking about this because one of the sample reports I looked at appeared to have an authenticity gap. Just to give you a little context, when writing a psychoeducational evaluation it is standard practice for the clinician to list the assessment tools they used on the first or second page, and then, in the body of the report, go into more detail about what each assessment contributed to the overall picture of the child they are working with. [note: I know ‘they’ and ‘their’ are not technically correct, but I am falling back on the unwritten rule that it can be acceptable to be grammatically awkward for the sake of gender neutrality]

On the report that caught my attention (issued by a private psychologist not working for the schools), the list of assessments included 19 separate instruments, while the report itself was just 10 pages long. The ratio of tests to pages seemed awfully high, and I immediately wondered how the clinician could possibly explain the results of 19 tests in only 10 pages. To offer perspective, when I was writing forensic evaluations I might use two or three assessment tools, which would result in a six- or seven-page report; when I was doing intelligence testing, administering four or five assessments resulted in an eight- or nine-page report (these are averages, of course; each specific evaluation was unique).

[Image: “Everything But the Kitchen Sink” glass beads mix]

Assessing a child is something you do deliberately and for a reason. Before you even sit down with the child, you should have an idea of what you want to be looking at and why. As the assessment proceeds, you may want to add or subtract a tool here or there, depending on the information you are getting. But what you do not want to do is start out by throwing every assessment you know at the child to see what “sticks.”

So why did this clinician approach psychoeducational assessment the way they did? The best-case scenario is that they genuinely felt each and every one of those 19 assessment tools was necessary, and hoped that, combined, they would offer a comprehensive picture of the child in question. Arguments for the best-case-scenario hypothesis are that this is an experienced clinician, they have a Ph.D., and they have most likely been trained on a variety of assessment tools. Arguments against it are that not all of the assessments given were included in the final report, which calls into question whether they were necessary in the first place. So in the worst-case scenario, there is the possibility that this clinician was taking a shortcut and didn’t put individualized effort into the assessment.

By ‘shortcut’ I mean that the clinician has a standard playbook they turn to when doing a certain kind of assessment. This approach is not inherently bad; years of experience might have led them to compile a “greatest hits” list of assessment tools. But I would argue that a generic group of tests one might use as a starting point should contain far fewer than 19 instruments. Again, based on my experience – and repeating the caveat that each case is different – I can think of two or three assessment tools I might be able to justify including in a “greatest hits” compilation. I cannot think of 19.

In the medical profession, excessive testing is an ongoing topic of discussion and debate. Whether the concern is about subjecting children or adults to too many medical tests, there is an understanding that in certain scenarios testing can do more harm than good. One recent news report even describes an “epidemic” of overtreatment. And the propensity for ordering expensive tests has led some to view doctors unfavorably and accuse them of financial greed.

Because school psychologists are paid a predetermined salary, nobody could reasonably accuse us of being motivated by greed; unlike private clinicians, we don’t get paid on a per-assessment basis. But what if giving a child so many tests leads parents or teachers or other clinicians – or, worst of all, the children we work with – to suspect something else: that we didn’t know what we were doing and had no clear plan going into the assessment? I doubt any clinician, no matter the setting, would want to be viewed that way. And as with excessive medical testing, there could be negative repercussions if school psychologists over-assess their student “clients.” For example, what if administering excessive or inappropriate assessment tools leads us to gather information we never needed in order to answer the original referral question? There are issues of privacy and ethics and trust at stake, and the potential to harm our – and our profession’s – reputation if we are viewed as treading too heavily in any one of those areas.

In the end, I can’t say what the psychologist was thinking when they picked that specific battery of 19 tests to administer. I also cannot say definitively that any one of those tests should have been left out. Not knowing the clinician’s thought process, it is perfectly reasonable to give them the benefit of the doubt. But it is also reasonable to conclude they did things in a way I would not. Either way, being exposed to a variety of testing styles and methods is a valuable exercise and can only help me hone my skills in the field.