The Concept of Conceptual Understanding

 

 

Do you suppose we could work towards a moratorium on the words "concept" and "conceptual"?  I see no place for them in the discussion of math education, or in the construction of examinations, preparation of lesson plans, etc.

 

In July of 1998 I was a member of a committee that was preparing a syllabus for the NY Regents' "Math B" exam. I recognize that I have described this committee before, and probably more than once in the scattered course of recording my memoirs; and I confess that while it was a unique experience for me it might be a familiar one to others; but since it taught me more than one lesson it deserves more than one reference.  (Richard Escobales was also a member of this committee, and I remember a third professor of mathematics on it, making three out of a total membership of about 12, the rest being teachers or administrative officers from the schools; and I can say now that we three, who had not been conspiring in advance of the meeting, voted together every time there was anything less than consensus, and each time lost by a landslide.)

 

Our 1998 committee, then, named the topics that should be addressed, and those which, though in the NY State Standards, should not: Should there be a sheet of formulas, should calculators be permitted, or demanded, etc.  When we were all done with subject-matter we still had to specify what percentage of the questions should be multiple-choice, how many short-answer or "extended response".  I had opinions on all these questions and so did the others, but we voted "democratically" and of course settled all these matters to the taste of the majority, all the decisions turning out to be in the direction of making the examination less searching.  Still, I understood almost all the questions we were voting on.  Almost.

 

This was the one I couldn't make out:  How many points of a 100-point total should be awarded for questions examining "conceptual understanding", how many for "procedural skills", and how many for "problem solving".

 

I tried to imagine an exam question about the quotient of two complex numbers or the zeros of a cosine function that tested "conceptual understanding", whatever that is, but which failed also to test the other two things, whatever they were.  It seemed to me sufficient that the questions be about mathematics. What on earth do the statisticians and psychologists do with those mystical three categories when it comes time to evaluate the students' math learning at the end of the year?

 

It turned out that there wasn't time to debate all this at our actual meeting, and it was only after we went home that, via email, we debated the weightings and voted on them.

 

In our email discussions of the following weeks, during which we completed our work, I seriously considered telling the other committee members that I would rather not try to vote on anything concerning these (to me) ill-defined categories, but then I decided that would be insulting.  My vote really didn't matter much; I realized from our face-to-face discussions in Albany that I was after all just a decoration there, perhaps so that the State of New York could later say that all levels of educational and mathematical expertise were represented.  As I recall, I submitted 40-40-30, but which category got which number in the end I don't remember.

 

When it came time to write the exam questions a few months later, that was done by a different bunch of people, and I was not asked to participate.

 

A discussion with some friends not too long ago reminded me of this incident, and of another one concerning "conceptual understanding" that occurred much longer ago, one that I had not forgotten by the time of my Albany meeting in 1998, but which I didn't then realize I still had the record of.  Now I see, with some correspondence of 1965 before me, how I had come, that long ago, to suspect the category "conceptual understanding" of meaninglessness.

 

In June of 1965 I was invited by Alice Foley, the former "curriculum specialist" but then Assistant Supervisor of the Brighton School District (in which I had two small children enrolled) to advise her about some NY State exam results that troubled her.  The results for "Beginning 6th Grade" students were particularly distressing, she told me. The Rochester suburb named Brighton is usually the highest-scoring district in Monroe County, but by 1965, I was given to understand, it had fallen a bit short.  So Miss Foley began by giving me a copy of each of the tests, with some charts listing the results, asking in particular for comment on the Grade 6 classes.

 

It is now, amazingly to me, the year 2008, 43 years later, yet I have the actual (mimeographed!) exam before me, and (carbon!) copies of my letters to her. The year 1965 was right in the middle of the ascendancy of "the new math" that was so controversial at the time, but I knew little about all that, for, like most mathematicians then as well as now, I was remote from the problems of math education in the schools. As I learned later, that is, after I had completed my transactions with Miss Foley, Brighton had adopted some of the "new math" elementary school curriculum materials and Miss Foley wondered if they were the cause of this drop in exam scores.

 

I wish now that I had asked more about those materials, but I didn't, and I can't say now what they were, except that the Cuisenaire rods had been introduced during my own children's time as a feature of the early grades.  But I evidently thought the examinations would speak for themselves, and that I was at least qualified to judge whether the exams that gave such low scores for Brighton children were really diagnostic of anything worthwhile.

 

Looking at these New York 1963 6th grade examination packets I see that the questions are classified as "Computation", "Problem Solving", and "Concepts"!  Thus the three mysterious categories that my Math B committee had in mind in 1998 have had names of long standing.  There is a certain comforting permanence in this:  In Albany, NY at least, over a period of at least 35 years, while other fashions in math education seemed to oscillate from "Newmath" to "Back To Basics" to "NCTM Standards", the profession has maintained at least some definition of "conceptual understanding".  Perhaps the same definition, though one cannot be sure except by the way it plays out in student testing.

 

I look at the 1963 exam questions before me now and recognize a fairly traditional list, though with some ill-advised efforts at real-life examples, and some even more futile, often puerile, questions about logical language.  My task was to answer Miss Foley's question of whether disappointing scores on this exam meant anything, especially in the category of "conceptual understanding".

 

I didn't know what the preceding year's scores had been, but I did know that the decline had been small.  Therefore, if a substantial number of questions were poorly posed or faulty, it would follow that the decline had a good probability of being part of a random fluctuation from year to year. Miss Foley apparently didn't have a copy of the exams in question, since my file here has only some for Grades 3, 6 and 7, published in 1960, 1963 and 1962, but I accepted them as generic.  My letter of 1965 to Miss Foley is clearly a comment on a Grade 6 exam no longer in my file here.  I now quote from it:

 

December 7, 1965

Dear Miss Foley:

 

... Meanwhile, I have studied the examinations from Albany by which you deduced that the children in grades K-5 didn't seem to be learning enough.  It would be extremely interesting to me if I could see the breakdown of scores by question number, especially for the test marked Beginning Grade 6.

 

[Evidently I didn't yet realize that the "generic" tests in my possession were not the actual ones with the scores that Miss Foley was worried about. Thus even if as a whole they were representative of the ones Miss Foley was anxious about, a breakdown of results by question number on the tests before me would not have had any meaning.  Just the same, I did go on to comment on particular questions on the "Beginning Grade 6" exam, though only those from Part III.  Part I, comprising Questions 1-20, was under "computation"; Part II, comprising Questions 21-40, under "problem solving"; and Questions 41-60, the part I had particularly attended to, were under Part III, "concepts".]

 

Even so, [my letter goes on] I doubt whether I could deduce very much, because I consider the examination [for Grade 6] a very bad one. It may indeed be that the children are weak on mathematical concepts, but the examination is very little evidence in that direction.  It is hard to believe that questions 56, 59 and 60 have any bearing on mathematical concepts.  Question 43 has no correct answer, Question 48 is gratuitously confusing because of the meaningless line segments between the points of the graph, and most of the other questions are answered by knowledge of rather unimportant items of jargon.  The first two parts are not so bad, though their difficulty is extremely variable..."

 

I see now that my objection to Question 48 derived from my lack of acquaintance with the notion of a "line graph", something I got to know only in 1977, when I found almost all the state standards I was then studying infected with them, as they are to this day.  That is, Question #48 allegedly presented a graph of "Average temperature" for each of the days Monday through Friday, i.e. a graph having a five-point domain, displayed as equally spaced points along the x-axis, each labeled with the name of one of the five days.  The five average temperatures were 40, 50, 30, 40 and 60, indicated as five points at those heights above the domain points; however, the graph showed not just those points, but also some line segments connecting them. Such a graph is called "a line graph", and I still don't like that, and believe it is poor preparation for later work even though the newspapers sometimes print that sort of thing.

 

The actual question, by the way, was, "This graph shows the average temperature on each of 5 days.  On which day was the average the same as Thursday's average?"  The correct answer was Monday, though if the line graph really meant what its diagram showed, another correct answer could have been "Tuesday and a half".  Fortunately this was not one of the four choices.
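To spell out that joke, taking the segments of the diagram literally and writing t for the fraction of the way from Tuesday to Wednesday: the segment joining Tuesday's 50 to Wednesday's 30 reaches Thursday's value of 40 exactly halfway along, since

$$50 + t\,(30 - 50) = 40 \quad\Longrightarrow\quad t = \tfrac{1}{2},$$

so the "line graph", if read as a genuine graph, equals Thursday's average at the point midway between Tuesday and Wednesday -- hence "Tuesday and a half".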

 

The other "conceptual" questions I complained of to Miss Foley were 56, 59, and 60, viz.

 

56.  What is the total number of yards in 3 miles?

(choices were 1000, 1520, 2800, and 5280)

 

59.  Which would make the bigger package?

(1) 1 lb. of nails

(2) 1 lb. of rice

(3) 1 lb. of butter

(4) 1 lb. of feathers

 

60.  Which number best tells the height from the floor of most chair seats?

 (choices were 18, 28, 38 and 48 inches)

 

A retrospective view of these three problems, after 45 more years of teaching and a brief experience among educators in Albany, still does not tell me how #56 tests "conceptual understanding" more than computation.  A good case can be made for #56 under "mental arithmetic", but there is no way to enforce mental arithmetic during a written examination.  I suppose #59 tests the understanding of something, but that something isn't mathematics.  It would be answered by any child with a mental picture of the four kinds of materials, and while a teacher might think it a test of the understanding of inverse proportions, no such formulation is likely to occur to an average 5th grader -- who might still be able to answer correctly.  Even less should be said of #60, which is probably the most distant from anything mathematical, no matter what grade level.  My letter to Miss Foley didn't go into such detail.
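For what it is worth, the computation #56 asks for is a single conversion, once the child recalls (or has memorized) that one mile is 1760 yards:

$$3 \text{ miles} \times 1760 \ \frac{\text{yards}}{\text{mile}} = 5280 \text{ yards},$$

so the intended answer among the printed choices is 5280 -- a figure better known, of course, as the number of feet in one mile.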

 

In that letter I suggested that I might compose a short test for entering 6th graders which would better serve the purpose of seeing whether the K-5 program was teaching concepts as I understood them, but she thanked me and said that would be more than needed for her purposes. She was happy to learn that the scores she had been worried about were not to be worried about.  Over the next few weeks she gave several talks to groups of teachers, and of parents, defending Brighton's math program against criticisms mentioning Brighton's diminished test scores in mathematics -- and citing me as the expert who had confirmed her position!

 

However, she did refer me to a 6th grade teacher who might be interested in experimenting with an alternative exam, and, still believing that the Brighton School District wanted my help in matters concerning mathematics, I did compose such a test, which is still in my files here. It still seems valid to me. In the fall of 1963 I arranged with two Brighton teachers to administer it to 3 classes of differing "ability", i.e. one average, one superior and one below average.  (Classes were labeled in those days, though not "tracked" from Day One via an "intelligence test".)  The results were a bit disappointing to me, though to be fair I had made it hard enough to separate out all the degrees of understanding.  It had 20 questions, and nobody got all 20, while only two students got 19 correct.  The median was about 9, taking all classes together.  Even though the test was multiple-choice (5 choices, I see, while NY gets along on four), I was able to see the nature of the errors, some of them things I would not have thought of; and so I learned quite a bit from that exercise, though I never was asked by the Brighton schools to employ it, or anything else I might do, for any practical purpose; and it was many years before I got involved in school math education again.

 

And I still don't know the meaning of "conceptual understanding".

 

Ralph A. Raimi

6 August 2008

Modified 14 June 2009