conference notes
Oct. 11th, 2004 09:38 pm

An interesting possibility for collaboration developed tonight. I spoke for a while with another programming-minded writer who has some good ideas for modeling documentation for maintainability and modularity. We might work together on this.
On a different subject, the folks at the Institute for Intelligent Systems are doing some nifty work on assessing the effectiveness of text (and other content). One of their systems, QUAID, evaluates survey questions (e.g., for pollsters, the Census Bureau, etc.); it shouldn't be too surprising that a lot of people don't actually understand the questions before they answer them, and the team confirmed this with (among other things) eye-tracking devices. Anyway, the tool suggests modifications; it doesn't compose text. But it did a pretty good job on the examples we threw at it.
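To make the idea concrete for myself, here's a toy sketch in Python. Nothing here is QUAID's actual code; the vague-quantifier list, the word-count threshold, and the comma heuristic are all invented for illustration. The point is just the shape of the approach: flag suspect features and advise, but never compose the question itself.

    # Toy illustration of QUAID-style screening: flag features known to
    # confuse survey respondents, without rewriting the question.
    # These lists and thresholds are made up for the sketch, not QUAID's.

    VAGUE_QUANTIFIERS = {"several", "many", "few", "frequently", "regularly"}
    MAX_WORDS = 20  # crude stand-in for a working-memory-load check

    def screen_question(question: str) -> list[str]:
        """Return advisory flags for a survey question."""
        words = question.lower().rstrip("?.!").split()
        flags = []
        vague = sorted(set(words) & VAGUE_QUANTIFIERS)
        if vague:
            flags.append("vague quantifier(s): " + ", ".join(vague))
        if len(words) > MAX_WORDS:
            flags.append(f"{len(words)} words may overload working memory")
        if question.count(",") >= 3:
            flags.append("complex syntax: consider splitting the question")
        return flags

    print(screen_question(
        "How frequently did you visit or contact several of your relatives?"))
    # -> ['vague quantifier(s): frequently, several']

The real system presumably has much richer linguistic knowledge behind each rule, but even this caveman version would have caught a couple of the examples we fed it.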
Along similar lines, Coh-Metrix suggests improvements to the readability of arbitrary text passages. (The "coh" is from "coherence".) Often its suggestions are not the ones that come out of those "simplified reading level" efforts popular with school texts. In fact, they found that following those guidelines to the letter actually hinders above-average readers, because sometimes you need to leave a few dots unconnected to stimulate analytical thinking in the reader. Spoon-feed it and some people never actually absorb it. Interesting.
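Out of curiosity I sketched what one coherence measure might look like. This is my own toy, not Coh-Metrix (which computes dozens of indices, LSA-based ones among them); it just measures the fraction of adjacent sentence pairs that share a content word, which is exactly the kind of signal a reading-level formula ignores.

    # Toy cohesion measure in the spirit of (but not from) Coh-Metrix:
    # the fraction of adjacent sentence pairs sharing a content word.
    # The stopword list is made up and far too short for real use.

    import re

    STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "or",
                 "is", "are", "it", "that", "this", "on", "for", "with"}

    def content_words(sentence: str) -> set[str]:
        return set(re.findall(r"[a-z']+", sentence.lower())) - STOPWORDS

    def adjacent_overlap(text: str) -> float:
        """0..1; higher means adjacent sentences share more vocabulary."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        if len(sentences) < 2:
            return 1.0
        hits = sum(1 for a, b in zip(sentences, sentences[1:])
                   if content_words(a) & content_words(b))
        return hits / (len(sentences) - 1)

    print(adjacent_overlap(
        "The pump moves coolant. The coolant absorbs heat. "
        "Heat leaves through the radiator."))  # -> 1.0

The interesting part of their finding is that the "right" value of a measure like this depends on the reader: maxing it out is precisely the spoon-feeding that loses the strong readers.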
AutoTutor uses NLP techniques that seem familiar (:-) ) to build dialogue into tutoring software. The software poses a problem to the student; the student types questions, candidate solutions, and random speculation into a window; and the tutor responds in ways that are mostly appropriate to guide the student toward the answer. They're using latent semantic analysis, comparing the student's utterances to an ideal answer. It sounds kind of like the way Clarit retrieval works.
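Since LSA is just linear algebra at heart, here's a minimal sketch of the matching step as I understood it, in plain Python/NumPy. The corpus and the rank are toy stand-ins, and projecting raw term vectors with the left singular vectors is a simplification; AutoTutor's actual space is trained on a large corpus and the scoring is surely more involved.

    # Minimal LSA-style matching sketch (my reconstruction, not AutoTutor):
    # build a term-document matrix, take a truncated SVD, and score a
    # student utterance by cosine similarity to the ideal answer in the
    # low-rank "semantic" space. Corpus and rank below are toy values.

    import numpy as np

    corpus = [
        "force equals mass times acceleration",
        "acceleration is the rate of change of velocity",
        "velocity is the rate of change of position",
        "mass resists changes in motion",
    ]
    ideal_answer = "the acceleration depends on the force and the mass"
    student = "how fast it speeds up comes from the force and the mass"

    vocab = sorted({w for doc in corpus for w in doc.split()})
    index = {w: i for i, w in enumerate(vocab)}

    def vec(text: str) -> np.ndarray:
        """Raw term-frequency vector over the corpus vocabulary."""
        v = np.zeros(len(vocab))
        for w in text.split():
            if w in index:
                v[index[w]] += 1.0
        return v

    A = np.stack([vec(d) for d in corpus], axis=1)   # term-document matrix
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    k = 2                                            # latent dimensions (toy)
    to_latent = U[:, :k].T                           # term space -> latent space

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    score = cosine(to_latent @ vec(student), to_latent @ vec(ideal_answer))
    print(f"semantic match: {score:.2f}")

A tutor built on this would presumably act on thresholds: high similarity confirms the student's answer, middling similarity triggers a hint toward whatever part of the ideal answer is missing.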
Attendance is about 50/50 writers and programmers. This is, apparently, typical for SIGDOC.
I'm getting some ideas for ways to better QA our documentation, though none fully formed yet. Tomorrow has some promising sessions toward that end.
(no subject)
Date: 2004-10-11 11:01 pm (UTC)

You bet, "Interesting." This validates a two-pronged theory I've been mulling for years. Essentially, the drive to make things more easily understood by the masses actually makes people dumber (probably in some nasty proportional way, too: the smarter you are, the greater the dumbing effect), and after being spoon-fed for too long, nobody wants to be part of an involved discourse. I offer Paul Tsongas in 1992 as an example. He told the clear truth to the people, but because truth tends to be ugly and complicated, he lost the interest of the voters, and thus the nomination.
This was brought home not long ago on "The Daily Show" when Jon Stewart scored President Clinton as a guest. He said, "When people think, Democrats win." Surprisingly, some Republican strategists agree with that assessment. This is exactly why partisan politics is so ugly and underhanded: it's flashy, easily broken down into oversimplified sound bites, distracts from real issues, and tends to pander to the less-informed and possibly the less-capable.
The adage "if you treat them like children, then they'll act like children" appears to be true. Clearly, if people are fed information as if they were morons, then they will act on that information as if they were morons.
I hate it when I scare myself.