Miriam Burstein at The Little Professor has some interesting comments about freshman composition classes, in response to a New York Times op-ed by Stanley Fish. Fish speaks rapturously of teaching freshmen writing without content by making them devise their own language.
This sounds like an interesting idea, but also one that's fraught with disaster potential. I don't think I'd like to see what this would end up being like with an instructor who wasn't completely sold on the concept. (And if this leads to a whole spate of pseudo-Tolkien novels written by people who started devising cultures to go along with their made-up languages, Fish may be in for some fearsome retribution.) I'm also not convinced that this is necessarily the One True Solution to the problem of bad student writing-- I tend to agree with the argument that the real problem is how to overcome the compartmentalizing that students are so prone to. But it's an interesting approach all the same.
Writing isn't a huge point of emphasis in physics, but I do seem to end up spending an inordinate amount of time dealing with written work, mostly because I'm a very text-oriented person in general. So I'm always interested in hearing good suggestions for how to teach students to write.
Some weeks back at a faculty lunch, I sat down with a colleague from the English department who asked "How did you learn how to write?" (I think she'd been polling the other faculty at the table before I got there.) I had to think about it for a minute or two, because I don't really remember a time when I had difficulty writing. I remember a few how-to-write-an-essay classes back in high school, but I regarded those as a gigantic waste of time, because I already more or less knew what to do.
When it comes to technical writing, though, I can nail down where and how I learned to write, and it wasn't in a class. It was at NIST, writing up articles for publication, and going through the process of "paper torture."
I've alluded to the paper torture process a few times before, most recently when talking about journals. The basic process was as follows: whoever had been the lead person on a given experiment would write up a complete draft of a paper, and then a meeting would be arranged with all the people who were going to be listed as authors, and we would go over it word by word.
And when I say "word by word," I mean that literally. There were always big-picture comments made in those meetings, dealing with how to structure the argument, and likely questions that would need to be addressed, but in the end, it always got down to the level of grammar and word choice. Things like "Do we really want to call this 'simple,' given that it took weeks to figure out?" or "Shouldn't this 'that' be a 'which' instead?"
It was frequently excruciating (though the xenon experiment paper torture sessions never reached the heights of absurdity of the BEC experiment, where they once devoted an entire three-hour meeting to the first paragraph of a paper), but tremendously instructive. Not so much because it taught me more about the rules of writing proper English (a cursory glance through the archives will show that that stuff never really stuck), but because it got me to really think things through when writing a paper draft.
I started to internalize the paper torture process, looking at my own text with a more critical eye, and asking "What is Steve likely to complain about in this paragraph?" My first drafts still look about the same (and what you get on this blog is mostly first drafts), but I know what to take out in the second draft (all instances of "however" and "indeed," for example). By the time I was writing the squeezed state paper at Yale, I was going through something like five internal drafts for every one I passed on to my co-authors. And in the end, revision is the key to good writing, completely independent of the subject.
I explained all that to my colleague at lunch, and she thought that sounded really interesting. And then she asked "Have you tried implementing that in your classes?" Which sort of threw me.
If you think about it, though, it's not a totally ridiculous idea (though I wouldn't attempt it in the class with the freshman engineers). I have some other colleagues who swear by oral lab reports, which work for a similar reason: the students are so terrified of looking foolish in front of their friends that they actually do the background work you would like them to do all the time.
The tricky part is finding a way to make the whole thing fair, and not unduly burdensome for the instructor. The optimum arrangement would be something that broke the students into groups and got them to do the "paper torture" process themselves, without the faculty member providing the bulk of the commentary. And, of course, you also need to avoid the inherent problem of group work, namely having one student stuck doing all the work for everyone else.
You could probably make it work for something like a lab class, though, by breaking the students into groups of, say, four, and having each student take one turn as the "first author" for one experiment, with separate grades for the first draft, the comments provided by other students, and the final draft. It'd be a bitch and a half to keep straight, but there might be something to the idea.
I still don't have the foggiest idea how to deal with the compartmentalization issue (that is, getting them to apply the same process to any other class), though.
The Horizon Problem
Like many big conferences, DAMOP includes a banquet the last night of the meeting, so everybody can get together and get a free dinner out of their registration fee. They always have a keynote speaker at these things, usually somebody with some vague connection to atomic physics, who works in some other field. Past speakers have included people talking about astrophysics in Antarctica, an inadvertently hilarious discussion of astronomy, and Norman Ramsey telling funny stories about famous scientists.
This year's speaker was every scientist's favorite Bush administration shill, presidential science advisor John Marburger. His technical background is apparently in AMO physics, which I didn't know, but he mostly talked about science funding, or more specifically, why there isn't any (including an entry for the Indirect Statement Hall of Fame: "After the attacks of September 11th, the nation responded to terrorism in a manner that has incurred a great deal of expense").
His talk could be somewhat unfairly summarized as "I think you people are doing great work, and I'd fund you if I could, but nobody listens to me," with a side helping of strained justification for why it's not really a Bad Thing that Federal funding of research is decreasing. Predictably enough, this created a bit of a stir among the audience.
Marburger surprised me a bit by taking questions after the talk, and when he responded to one question by saying that it's not a problem if Federal research funding is dropping, because industry is funding more research than ever, something really incredible happened. I heard a loud spluttering noise, as Tom, a distinguished physicist (who literally wrote the book on his area of research) sitting two tables away, nearly exploded. It's hard to really appreciate without knowing him, but Tom is one of the quietest physicists I've ever met, which is saying something.
After barely containing himself at the table, he made his way up to the front of the room, and when he got the mike, asked "How can you say industrial funding has increased, given what happened to Bell Labs?" (Or words to that effect).
For those who aren't up on the history of research, Bell Labs was the wide-ranging research arm of the Telephone Company, back when there was only one, and is responsible for many of the inventions that shaped the world we live in. Their best-known products are the transistor and the laser, and with a bit of Googling, you can find a host of sites (official and unofficial) listing other Bell Labs inventions, from solar cells to radio astronomy to UNIX. My own research field, laser cooling, has a Bell Labs connection through Steve Chu and Art Ashkin.
Bell Labs is still around today, as a part of Lucent, but it's a shell of its former self. The funding levels aren't what they were, and there isn't the same level of free-ranging basic research-- they're much more focussed on commercial products and the bottom line than they were in their heyday. Scientists don't speak of the current Bell Labs with the same sort of reverence that they do the older version.
The story of Bell Labs is one of many-- Westinghouse, GE, Xerox, and a number of other companies had research labs working in the same mode as Bell Labs, back in the day, and every one of them was reined in and scaled back through the 80's and 90's. Which was Tom's point-- while there's still industrial funding of research, it's no longer a replacement for the basic research funding provided by government bodies like the National Science Foundation or the Department of Energy. Private companies used to fund basic science, but they don't any more.
I don't think it's a coincidence that the fall of the great corporate research labs started at the same time as the current Wall Street ascendancy. In today's "what have you done for our stock price this quarter" environment, something like the classic Bell Labs is hard to justify. While the great Bell Labs inventions of the 50's and 60's have quite literally helped build the world we now live in, they're not the sort of thing that produces an immediate profit. Basic research is very much a long-term investment, the full rewards of which may not be obvious for twenty or thirty years. And great benefits thirty years down the road don't necessarily produce profits right now.
And yet, you can't continue to make technical progress without that investment in basic research. Which is probably the biggest problem I have with modern corporate culture: the horizon they use is too short. You may turn bigger profits in the short run by scaling back your research operation, but ten or twenty years out, you may end up regretting it. But there's no reward for anything other than maximum short-term gain-- the people making funding decisions now probably won't be around in twenty years' time, so the quick and easy gain looks like a really good deal.
(This is, incidentally, probably the biggest reason why I'm opposed to almost all "market-based reform" schemes. The easiest way to make money in the short term is to screw things up for the long term, but it's hard to find ways to keep people focussed on future health rather than current profits.)
(And running with the tangent just a little farther, this is also why I don't share Kevin Drum's faith in the inevitability of national health care. Kevin's right that the rising costs of health insurance will eventually force businesses to do something, but I'm afraid that they're more likely to take the Wal-Mart approach and just drop coverage than to do the sensible thing and support a real reform of the system. In the long term, national health care is the rational thing to do, but there's a bigger short-term gain to just quitting the game.)
For the last ten or fifteen years, we've been able to compensate for the loss of corporate funding for basic science by having NSF and NIH pick up a lot of the slack. With the arrival of an "MBA President," though, that's started to change, as the same short-term attitudes have been brought to the government level. There's a quick profit (well, a quick reduction in the losses due to ill-advised policies in other areas) in squeezing NSF and DOE and NIH, and the real damage won't be felt until long after the current crowd of dangerous lunatics have left power.
Things aren't yet incredibly bad in my area of physics, largely because our research is relatively cheap, but high energy and nuclear physicists are already feeling the bite. Times are starting to get tight in AMO physics as well, and the future for American science in general isn't looking too bright.
And the really scary thing is that I'm not sure this is something that can be fixed by simply changing who controls the Federal government-- the problem of science funding has its roots in modern corporate culture, and without a wholesale change in that culture, I'm not sure real improvement is possible. And I'm not sure that change can be accomplished without a catastrophic crash.
In short, this wasn't exactly an uplifting dinner talk. But, on the bright side, they kept the bar open afterwards...
Fifty Pictures Worth?
Kieran Healy pokes fun at loquacious law professors, asking "Why do Law Professors write 50,000 word articles?" In the comments, John Quiggin points to an older Crooked Timber post by Micah Schwartzman on a revolutionary development in legal publishing: a limit of 35,000 words per article.
Coming from physics, I find this incredibly amusing. The longest article I've published is somewhere on the short side of 8,500 words, and takes up ten journal pages. That includes footnotes, figure captions, and bibliographic citations (and probably a bunch of LaTeX control codes, as I just pasted the raw code into Word and asked for a count). My most-cited paper (231 citations according to NASA, not that I'm keeping track) comes in under 4800 words (estimated in the same way, though I'm not sure that the file I used was the final draft), and five pages.
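(If you're curious what the paste-into-Word approach amounts to, here's a toy sketch of counting words in raw LaTeX source. This is my own illustration, not any real detexing tool, and the regexes only handle simple cases-- comments, inline math, and control sequences-- but it's in the same spirit.)

```python
import re

def latex_word_count(source: str) -> int:
    """Rough word count for a LaTeX source snippet.

    Strips comments, inline math, control sequences, and grouping
    characters, then counts whatever whitespace-separated tokens remain.
    A crude approximation, good enough for "am I under the limit?"
    """
    text = re.sub(r"(?<!\\)%.*", "", source)      # drop comments
    text = re.sub(r"\$[^$]*\$", " ", text)        # drop inline math
    text = re.sub(r"\\[a-zA-Z]+\*?", " ", text)   # drop control sequences
    text = re.sub(r"[{}\[\]]", " ", text)         # drop grouping braces/brackets
    return len(text.split())

sample = r"A \emph{simple} test: $E = mc^2$ is famous. % comment words"
print(latex_word_count(sample))  # counts 5 words: A, simple, test:, is, famous.
```

A real count would also need to handle display math, environments, and the bibliography, which is why pasting into a word processor is the lazy (and common) alternative.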
(By way of comparison, the law journal article that Kate published was close to 22,000 words, and took up 37 pages.)
One of the weird things about physics research is that the top research journals have strict page limits. Physical Review Letters is probably the top journal in physics, and has a hard limit of four (two-column) journal pages, including all figures, references, and notes. The only more prestigious journals around are Science and Nature, and their limits are similar (though they allow more slack for figures and references).
There are journals with less stringent length requirements, but they're mostly of a lower rank. My longest published article was in Physical Review A (still a very good journal, but a significant step down from PRL), and runs ten pages.
This means that physicists are in an odd position when it comes to publishing. In order to get into the very best journals, you need to go into the lab and work really hard to amass huge amounts of data, to absolutely and conclusively show that you have nailed down what you're talking about. And then you need to work even harder to make it all fit into four pages.
When I was in grad school, I published four articles in PRL, and the writing process was the same every time out (we referred to it as "paper torture"): One person would write a complete draft and distribute it to the other authors, and then we would have a meeting and rip the draft apart. And every time, we would end up suggesting changes just because they would save a few characters-- I don't think we ever got to the level of substituting "a" for "the," but it came close. (I tell my students that the first editing pass consisted of deleting all of the adjectives and adverbs, which isn't that far off.)
I don't know how generally this applies to other sciences. Science and Nature have the same limit for all disciplines, but I'm not sure what the page limits are like for the second-tier journals in chemistry or biology. I know from conversations with colleagues in engineering and social sciences that it doesn't extend there-- they were shocked to hear that we have a four-page limit.
Of course, having been trained to the four-page limit, the idea of a 35,000-word limit on an article just seems absurd to me. That's probably close to the length of my entire PhD thesis (I don't have a complete draft in an accessible electronic format, so I can't check), and that was more than five papers' worth of stuff. The idea of writing that much for a single article is a little bit scary-- I suspect I'd feel guilty for going on at that kind of length.
So, if you've ever asked a physicist a question and gotten a terse and opaque answer, now you know why. Or, alternately, if you've ever asked a physicist a question and gotten a thirty-minute lecture about the most minute mathematical details, now you know what they were reacting against...
Better You Than Me
Back in my grad school days, I went to the Centennial Meeting of the American Physical Society, in Atlanta. I gave an invited talk at that meeting, which went very well, and looks great on the ol' CV, but on the whole, I didn't enjoy the meeting that much. Partly, that was because I was in the middle of Thesis Hell at the time, but it was mostly that it was just way too big a meeting to take it all in.
After a day or so of trying to keep up with stuff from my own field, I decided to punt, and go to invited sessions from other disciplines instead. One of the talks I went to was an astrophysics talk on extrasolar planets. The speaker (whose name I didn't remember through the end of the talk, so you can't possibly expect me to know it now) described their measurement system in great detail, explaining how the light from a target star was sent into a spectrometer, which broke the spectrum up over several different bands on a CCD array. The presence of a planet was detected by measuring the minute shift in the positions of those lines due to the gravitational tug of an unseen planet orbiting the star.
The really mind-boggling thing about that, to me, was the magnitude of the shift. From one extreme to the other, the lines changed position by something like a quarter of a pixel on their CCD. That's a group with some serious confidence in their curve fitting algorithms.
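(To get a feel for how a quarter-pixel shift can be measurable at all, here's a toy sketch-- my own illustration, nothing like a real radial-velocity pipeline-- showing that even the simplest possible estimator, an intensity-weighted centroid, locates a well-sampled line to a small fraction of a pixel.)

```python
import numpy as np

def line_centroid(profile: np.ndarray) -> float:
    """Intensity-weighted centroid of a spectral line, in pixel units.

    The crudest sub-pixel position estimator there is; real pipelines
    use full line-profile fits, but the principle is the same.
    """
    pixels = np.arange(len(profile))
    return float(np.sum(pixels * profile) / np.sum(profile))

# Simulate the same Gaussian line centered at pixel 20.00 and at 20.25.
pixels = np.arange(41)
line_a = np.exp(-0.5 * ((pixels - 20.00) / 3.0) ** 2)
line_b = np.exp(-0.5 * ((pixels - 20.25) / 3.0) ** 2)

shift = line_centroid(line_b) - line_centroid(line_a)
print(shift)  # recovers (very nearly) the 0.25-pixel offset
```

With photon noise, calibration drifts, and a real instrument profile, doing this reliably is far harder than the toy version suggests-- hence my admiration for their curve fitting.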
I was reminded of this a little while ago, when Sean Carroll posted an image of what might be the signature of a black hole forming. "Amazing what stories astronomers can spin from such sparse data," he wrote, and it's hard to disagree. More recently, Matt McIrvin offers a collection of Cassini links that make more or less the same point. The folks at JPL manage to wring an amazing amount of information out of a paltry handful of indirect experiments.
Those observations are solidly in the category of experiments that I'm glad to see done by somebody else. I'm incredibly impressed by what they manage to do, but I just don't think I'd have the patience. I'm happy to wait for other people to generate spiffy computer images of what they've discovered, and post them on the Web.
Of course, those sorts of experiments aren't confined to astronomy. One of my all-time favorites (I've written about it twice before, back in the early days of this blog) was presented again at DAMOP last week: the "Eot-Wash" torsion pendulum experiments at the University of Washington. These guys have taken the basic Cavendish experiment to measure the force of gravity, and pushed it to extremes I wouldn't've thought possible.
The talk at DAMOP reported some early results from their latest experiment, to look for deviations from Newtonian gravity at extremely short distances. It's an ingenious experiment-- they use a set of disks with holes in them as an attractor to perturb the motion of a torsion pendulum (basically, a mass suspended by a very fine wire, and allowed to twist in response to gravitational forces). There's a thin disk with ten holes in it very close to the pendulum, and a thicker disk behind it with the holes offset from those in the top disk. The thickness and spacing of the disks is arranged so that the gravitational attraction of the heavier bottom disk should exactly compensate for the missing mass of the holes in the top disk, and vice versa.
That is, the forces should cancel exactly, provided that the force of gravity obeys Newton's Law of Universal Gravitation. Which it does at macroscopic scales, but there are theoretical models that predict that gravity should get either stronger or weaker at very short distances (less than a hundred microns, or 0.1 mm). If it does, the Washington Group ought to be able to detect it from the change in the behavior of their pendulum.
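(For reference, short-distance deviations of this kind are conventionally described by tacking a Yukawa term onto the Newtonian potential-- this parametrization is the standard convention in the field, not something specific to the talk:)

```latex
V(r) = -\frac{G m_1 m_2}{r}\left[1 + \alpha\, e^{-r/\lambda}\right]
```

Newtonian gravity corresponds to $\alpha = 0$; the various theoretical models predict nonzero $\alpha$ (of either sign) with a range $\lambda$ below about a hundred microns, which is why the experiment pushes to such short separations.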
It's a beautiful piece of work, and not just because it offers a chance to test the predictions of extremely abstract high-energy theories in a table-top experiment. The care that they put into the design and implementation of their apparatus is just amazing. I'm faintly in awe of their stuff, though again, I'm glad I'm not the one who has to do the experiments.
For the record, they showed some extremely preliminary data at DAMOP that showed a hint of a change right at the limits of their sensitivity. Those are very much what I think of as "Feynman points" (after a comment attributed to Feynman, that you should never trust the last couple of data points an experimentalist shows, because if they could've gotten more data, they would've. Cue Nathan with the anti-Feynman backlash...), but they're tooling up now to make an even more sensitive pendulum apparatus, to push the measurement to even shorter distances.
If the hints they showed hold up, things will get very interesting. Not just because it will actually provide some experimental data to constrain string theorists, but also because somebody's going to have to repeat the measurement, which will be a hell of a trick.
Whatever happens, I'll enjoy watching it. From a distance.
Revenge of the Sith in Six Words