kitchen table math, the sequel: software
Showing posts with label software. Show all posts

Thursday, February 21, 2013

Devlin's Lament: the symbol barrier

(Cross-posted at Out In Left Field)

In an article in the most recent issue of American Scientist entitled "The Music of Math Games," Keith Devlin (head of the Human-Sciences and Technologies Advanced Research Institute at Stanford University and NPR's "math guy") says that learning math should be like learning to play the piano. In doing so, he recalls (but does not credit) Paul Lockhart's Lament ("A piano student's lament: how music lessons cheat us out of our second most fascinating and imaginative art form"), which I blogged about here.

Though Devlin is no literary virtuoso, not all of what he writes here is mushy metaphor. He begins with a discussion of educational software, and here his points are clear and consistent with my own experience. Most "math games" and "math education" software programs I've seen don't make mathematics an organic part of the games or activities. Instead, math problems--mostly arithmetic problems of the "mere calculation" variety--are shoe-horned into non-mathematical situations. Here they serve simply as tasks you must complete before moving through the current non-mathematical activity or on to the next non-mathematical activity.

As Devlin writes:
To build an engaging game that also supports good mathematics learning requires... understanding, at a deep level, what mathematics is, how and why people learn and do mathematics, how to get and keep them engaged in their learning, and how to represent the mathematics on the platform on which the game will be played.
The same is true of language learning. Most linguistic software taps only superficial aspects of language, and, as I know from personal experience, it takes great effort to build a program that does more than that.

Where I begin to part ways with Mr. Devlin is in his discussion of traditional math and what he thinks is an excessive emphasis on symbols:
Many people have come to believe mathematics is the memorization of, and mastery at using, various formulas and symbolic procedures to solve encapsulated and essentially artificial problems. Such people typically have that impression of math because they have never been shown anything else...
...
By and large, the public identifies doing math with writing symbols, often obscure symbols. Why do they make that automatic identification? A large part of the explanation is that much of the time they spent in the school mathematics classroom was devoted to the development of correct symbolic manipulation skills, and symbol-filled books are the standard way to store and distribute mathematical knowledge. So we have gotten used to the fact that mathematics is presented to us by way of symbolic expressions.
This approach to math, Devlin suggests, is at odds with the resolutions of a "blue-ribbon panel of experts" serving on the National Research Council’s Mathematics Learning Study Committee ("Adding it Up: Helping Children Learn Mathematics," National Academies Press, 2001). In Devlin's words, these resolutions hold that math proficiency consists of:
the aggregate of mathematical knowledge, skills, developed abilities, habits of mind and attitudes that are essential ingredients for life in the 21st century. They break this aggregate down to what they describe as “five tightly interwoven” threads. The first is conceptual understanding, the comprehension of mathematical concepts, operations and relations. The second is procedural fluency, defined as skill in carrying out arithmetical procedures accurately, efficiently, flexibly and appropriately. Third is strategic competence, or the ability to formulate, represent and solve mathematical problems arising in real-world situations. Fourth is adaptive reasoning—the capacity for logical thought, reflection, explanation and justification. Finally there’s productive disposition, a habitual inclination to see mathematics as sensible, useful and worthwhile, combined with a confidence in one’s own ability to master the material.
Ah, "21st century skills," "habits of mind," "conceptual understanding," "real-world situations," "explanation," "disposition"...--all this makes me wonder about the ratio of mathematicians to math education "experts" on this blue-ribbon panel. (It should be noted that Devlin himself is not, strictly speaking, a mathematician; he holds a Ph.D. in logic from the University of Bristol, and, while affiliated with Stanford, is not a member of the Stanford math department.)

Standing in the way of these lofty goals is what Devlin calls the "symbol barrier":
For the entire history of organized mathematics instruction, where we had no alternative to using static, symbolic expressions on flat surfaces to store and distribute mathematical knowledge, that barrier has prevented millions of people from becoming proficient in a cognitive skill set of evident major importance in today’s world, on a par with the ability to read and write.
To the rescue comes... Devlin's math education software program:
With video games, we can circumvent the barrier. Because video games are dynamic, interactive and controlled by the user yet designed by the developer, they are the perfect medium for representing everyday mathematics, allowing direct access to the mathematics (bypassing the symbols) in the same direct way that a piano provides direct access to the music.
Devlin's notion that a well-designed math video game can help students meet the National Academy's goals for math education rests on two assumptions. One is that students can achieve a sufficient level of mastery in mathematics without symbols. The other is that playing such video games is to math what playing the piano is to music.

To address the first claim, Devlin elaborates the analogy to music:
Just how essential are those symbols? After all, until the invention of various kinds of recording devices, symbolic musical notation was the only way to store and distribute music, yet no one ever confuses music with a musical score.
...
Just as music is created and enjoyed within the mind, so too is mathematics created and carried out (and by many of us enjoyed) in the mind. At its heart, mathematics is a mental activity—a way of thinking—one that over several millennia of human history has proved to be highly beneficial to life and society.
But there's an important difference between math and music--and a reason why no one confuses music with a musical score. Music has a privileged place in subjective experience. Along with sensations like color, taste, and smell, it produces in us a characteristic, irreducible, qualitative impression--an instance of what philosophers call "qualia." Just as there's no way to capture the subjective impression of "redness" with a graph of its electromagnetic frequency, or of "chocolate" with a 3-D model of its molecular structure, so, too, with the subjective feeling of a tonic-dominant-submediant-mediant-subdominant-tonic-subdominant-dominant chord progression. Embedded in what makes music what it is to us are the qualia of its chords and melodies.

Like most other, more abstract concepts ("heliocentric," "temporary"), mathematical concepts don't generally evoke this kind of qualitative sensation. What makes math beautiful are things like elegance, patterns, and power. Unlike a Bach fugue translated homomorphically into, say, a collage of shapes, mathematical concepts can be translated into different representational systems without losing their essence and beauty.

Devlin argues that while we might write down symbols in the course of doing real-life math, it is primarily a "thinking process," and that "at its heart, mathematics is a mental activity—a way of thinking." I agree. Indeed, math is much more appropriately compared with thoughts than with music. But this makes math symbols the mathematical equivalent of linguistic symbols. While thoughts, like math, can be expressed in a number of different symbol systems, you need some sort of symbol system in order to represent your own thoughts and to understand the thoughts of others.

This is especially true of abstract thoughts--and of abstract math. As Devlin himself admits, "the advanced mathematics used by scientists and engineers is intrinsically symbolic." What isn't intrinsically symbolic, Devlin claims, is "everyday mathematics":
The kind of math important to ordinary people in their lives... is not, and it can be done in your head. Roughly speaking, everyday mathematics comprises counting, arithmetic, proportional reasoning, numerical estimation, elementary geometry and trigonometry, elementary algebra, basic probability and statistics, logical thinking, algorithm use, problem formation (modeling), problem solving, and sound calculator use. (Yes, even elementary algebra belongs in that list. The symbols are not essential.)
OK, but what does this mean for education? Are we going to decide before the end of middle school which students are going to become scientists, engineers, and mathematicians, and only help those students scale the "symbol barrier"? For a barrier it certainly is, as Devlin himself notes: "people can become highly skilled at doing mental math and yet be hopeless at its symbolic representations."

But Devlin is too busy appreciating the (well-studied) math skills of Brazilian street vendors, who do complex arithmetic calculations in their heads with 98% accuracy, and supposedly without the help of symbols (even mental ones?), to realize the educational implications of the fact that "when faced with what are (from a mathematical perspective) the very same problems, but presented in the traditional symbols, their performance drops to a mere 35 to 40 percent accuracy." No, not everyone is going to become an engineer. But not all non-engineers are going to become Brazilian street vendors.

It's ironic how deeply Devlin appreciates the difficulty that "ordinary people" have with the symbol barrier without appreciating what this says about their educational needs:
It simply is not the case that ordinary people cannot do everyday math. Rather, they cannot do symbolic everyday math. In fact, for most people, it’s not accurate to say that the problems they are presented in paper-and-pencil format are “the same as” the ones they solve fluently in a real life setting. When you read the transcripts of the ways they solve the problems in the two settings, you realize that they are doing completely different things. Only someone who has mastery of symbolic mathematics can recognize the problems encountered in the two contexts as being “the same.”
Instead of seeing this as a reason for exposing children to mathematical symbols early and often, Devlin sees this as reason to create computer games that somehow teach math non-symbolically.

He calls this "adaptive technology," a term that should raise red flags. In a recent blog post, I wrote about how assistive technology often becomes yet another excuse not to teach basic skills. Kids with dyslexia struggle mightily with the symbol system of written language; should they instead learn everything through text-to-speech and speech-to-text devices, and never learn how to read and write?

Devlin makes a few other strained comparisons to the piano:
The piano metaphor can be pursued further. There’s a widespread belief that you first have to master the basic skills to progress in mathematics. That’s total nonsense. It’s like saying you have to master musical notation and the performance of musical scales before you can start to try to play an instrument—a surefire way to put someone off music if ever there was one.
No it's not; it's like saying you have to master simple scales and exercises before you move on to Rachmaninoff.
The one difference between music and math is that whereas a single piano can be used to play almost any tune, a video game designed to play, say, addition of fractions, probably won’t be able to play multiplication of fractions. This means that the task facing the game designer is not to design one instrument but an entire orchestra.
Can one create a video game that functions "as an instrument on which a person can 'play' mathematics?"
Can this be done? Yes. I know this fact to be true because I spent almost five years working with talented and experienced game developers on a stealth project at a large video game company, trying to build such an orchestra.
What does Devlin's software do? The last two paragraphs of this article function as an extended but not very informative infomercial. Here's the most informative excerpt:
Available in early March, Wuzzit Trouble is a game where players must free the Wuzzits from the traps they’ve inadvertently wandered into inside a castle. Players must use puzzle-solving skills to gather keys that open the gearlike combination locks on the cages, while avoiding hazards.
Puzzle solving? As I argue in my last post on math games, existing games already offer some version of this, and it isn't math. This, indeed, is one of the other problems with so-called math education software.

Devlin suggests his software is different:
Unlike the majority of other casual games, it is built on top of sound mathematical principles, which means that anyone who plays it will be learning and practicing good mathematical thinking—much like a person playing a musical instrument for pleasure will at the same time learn about music.

Wuzzit Trouble might look and play like a simple arithmetic game, and indeed that is the point. But looks can be deceiving. The puzzles carry star ratings, and I have yet to achieve the maximum number of stars on some of the puzzles! (I never mastered Rachmaninov on the piano either.) The game is not designed to teach. The intention is to provide an “instrument” that, in addition to being fun to play, not only provides implicit learning but may also be used as a basis for formal learning in a scholastic setting.
If you say so. But I wonder how much it will cost schools (and society) to find out whether this latest incarnation of "math education" software helps prepare students to become mathematicians, scientists, engineers--or Brazilian street vendors.

Monday, September 17, 2012

Computerized teaching: the feedback gap

Yet another breathless account of the wonders of computerized learning appears in this weekend's New York Times Magazine in an article entitled "The Machines are Taking Over: advances in computerized tutoring are testing the faith that human contact makes for better learning."

The article opens with a scene of an actual human being tutoring a fellow species member. While her tutee works on a problem (calculating average driving speed), the tutor provides lots of interactive feedback. Neil Heffernan, the tutor's fiancé, catalogued the different types of feedback she gave under such categories as “remind the student of steps they have already completed,” “encourage the student to generalize,” and “challenge a correct answer if the tutor suspects guessing.” According to the article, Heffernan then "incorporated many of these tactics into a computerized tutor," which he spent nearly two decades refining. Now called ASSISTments, it is used by more than 100,000 students "in schools all over the country." The article describes the experience of one of these 100,000 students with the program's interactive feedback:
Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.
In other words, it's the same old binary right-or-wrong feedback that nearly every educational software program has been using for decades. As the article notes:
In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right.
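The extra-space failure the article describes is what happens when answers are graded as literal strings. A minimal sketch of the difference, in Python (the function names and the random-sampling equivalence check are my own illustration, not how ASSISTments actually works--real systems would more likely use a computer-algebra library, and `eval` on untrusted input is unsafe outside a toy example):

```python
import random

def strict_check(submitted: str, expected: str) -> bool:
    # The brittle approach the article describes: exact string match,
    # so even an extra space yields "Sorry, wrong answer."
    return submitted == expected

def equivalence_check(submitted: str, expected: str, trials: int = 20) -> bool:
    # A more forgiving approach: treat both answers as expressions in x
    # and compare their values at random sample points.
    try:
        for _ in range(trials):
            x = random.uniform(-10, 10)
            if abs(eval(submitted) - eval(expected)) > 1e-9:
                return False
        return True
    except Exception:
        return False  # unparseable input counts as wrong

# Tyler's problem: rewrite 3x - y = 5 in function form, i.e. y = 3x - 5.
expected = "3*x - 5"
print(strict_check("3*x-5", expected))             # False: the extra-space problem
print(equivalence_check("3*x-5", expected))        # True
print(equivalence_check("-(5 - 3*x)", expected))   # True: algebraically equal
print(equivalence_check("5 - 3*x", expected))      # False: Tyler's first try
```

Even this crude numeric check would accept any algebraically equivalent form; the string comparison accepts exactly one. The gap between either of these and a human tutor's "nearly infinite number of potential responses" is the point of the passage that follows.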
True, the program is still a work in progress. But what's being refined, according to the article, isn't the feedback. Rather, it's the program's ability to detect when a student is getting bored, frustrated, or confused (via facial expression reading software, speed and accuracy of responses, and special chairs with posture sensors "to tell whether students are leaning forward with interest or lolling back in boredom."):
Once the student’s feelings are identified, the thinking goes, the computerized tutor could adjust accordingly — giving the bored student more challenging questions or reviewing fundamentals with the student who is confused.
Or "flashing messages of encouragement... or... calling up motivational videos recorded by the students’ teachers."

Also being refined is the "hint" feature, which users click on when stumped. Human beings (particularly teachers) track common wrong answers and have other human beings (particularly students) come up with helpful hints. These hints are then incorporated into the next generation of ASSISTments.

Cognitive Tutor, a more established software program that is "used by 600,000 students in 3,000 school districts around the country," also limits its feedback to hints and right-or-wrong responses.  And it, too, is being refined based on data from human users:
Every keystroke a student makes — every hesitation, every hint requested, every wrong answer — can be analyzed for clues to how the mind learns.
Ultimately, this data will be put to use not to refine feedback on particular student responses, but to help decide how to space out material and schedule periodic reviews.

But it's carefully tailored feedback on particular responses by particular students that makes human tutoring--the inspiration for all these programs--as powerful as it is.

In my earlier post on Cognitive Tutor, I wrote that programming sufficiently perspicuous feedback for mathematical problems "strikes me as even more prohibitive" than the feedback I labored for years to provide in my GrammarTrainer program. Last night I ran this impression past a mathematician friend of mine who cares a lot about effective math instruction. She emphatically concurs.

When it comes to educational software developers--as opposed to educational software users--there is somewhat perspicuous feedback on whether their answers (answers to students' educational needs) are on track. As I wrote earlier, that feedback isn't particularly encouraging.

(Cross-posted at Out In Left Field).

Thursday, March 25, 2010

orangemath on math software

I've used everything but MathScore (soon). While it seems counter-intuitive, rigor isn't among the first issues to look at in software--at least in a classroom environment. For example, a major concern is the number of questions in the database! If there aren't enough questions, then students cheat like crazy. Furthermore, the main value of online instruction isn't the instruction--it's the "continuous formative assessment."

Actually, good instruction is a problem. Several software vendors, such as ALEKS, intentionally keep the instruction brief, because it forces the students to think more in solving problems. The programs that really instruct well sell WELL to curriculum consultants, etc.--but students don't learn more. They do less work. This sounds bad, but consider textbooks. Students don't read them. They start on problems and flip to examples.

Much more can be said, but before developing opinions, use ALEKS first. It is by far the most robust product--with a solid AI engine--and it uses constructive responses. It may not cover enough for an independent active learner; if so, try Cognitive Tutor. If price is an issue and you work with primary grades, SmartMath.eb.com and ixl.com are fruitful. However, for pre-K, start with Symphony Math.

If your focus is pre-Algebra, iPass from iLearn is actually a great product - a bit costly for me, and you need to buy into its 2x2, proportion approach.

With experience, you will develop bias that has nothing to do with instruction. For example, StudyIsland has great questions and it allows users to submit corrections to questions (ALEKS has no wrong questions), but it's just multiple choice. APEX has great instruction and some constructive responses, but unlike ALEKS, the response must be exact in spelling, etc.--a dumb system--and APEX has numerous wrong/bad questions.

I haven't toyed with expensive solutions like Apangea. They include instruction with real person help. This is the big deal that many of you want. While Apangea uses US Math majors, the future is low cost help from India when a student gets stuck.

This was written quickly. Contact me if you have questions: orangemath@gmail.com

[snip]

OK, for those that want rigor, you should try Raffles from Heymath: http://heymath.com/main/productPrime.jsp. Heymath has excellent flash demonstrations combined with the Raffles Singapore Curriculum. Spend some time on this site and see what is making Asia tick. Also, SmartMath is sold through Encyclopedia Britannica, but it is Hong Kong's Planetii.

Math software is a worldwide competition, with ALEKS and ST Math as technology leaders, but there are many others. Remember, price matters when comparing.

[snip]

Another key aspect of math software is "coverage": not all students respond to a software program well. Every classroom needs at least two programs to reach everybody (time on task, number of problems). ALEKS has great coverage--somewhere between 60 and 70% with my reluctant students--and adding IXL or SmartMath gets me close to 100%.

Note: I laugh when people simply compare programs. As a teacher I play to find a fit. I need at least 2 programs. If you use just one, like "I Can Learn," which has nice embedded videos for instruction, I doubt if all students succeed. Don't be confused by averages in score growth, when every student must learn.

If I don't use ALEKS, I need 3 programs. For example, I respect Cognitive Tutor a great deal, but not many of my students respond to it. They quit.

Thanks, OrangeMath!

Wednesday, May 27, 2009

James Fallows software recommendations

Just saw this comment on David Allen's blog:
I have used and even written about Personal Brain. It is intriguing, but at least for my kind of work (text heavy, lots of reference material) I find that it doesn't scale very well. But it is so distinctive and interesting that it's certainly worth a look. I find MindManager better/easier for graphically-oriented organizing, and a normal outliner, including the not-quite-normal but useful BrainStorm, to work better with straight text. And, of course, Zoot to keep track of large-volume reference material.
I use Inspiration for thinking and outlining and I love Scrivener for writing (and for keeping everything in one place).

I'm going to check out Zoot.

What I need is ONE piece of software that keeps everything in the same 'document': notes, to-dos, research, factoids, and whatever it is I'm writing.

Scrivener comes very close.

