
Uncertain Principles

Physics, Politics, Pop Culture

Friday, May 02, 2003

Excalibur 2.5.1

I didn't actually intend to make this a regular Friday feature, but it's a reasonably good frivolous weekend thing to post, so I'll do another mix tape post while I keep an eye on the electronically-submitted lab reports trickling in.

This is one of a pair of tapes I made to listen to while working on my PhD thesis. The title here comes from the LaTeX spell-checker I was using at the time-- I'm inordinately amused by the name. It's as if the Lady of the Lake, her arm clad in purest glittering samite, reached up out of the water and lobbed me a bug-fix, signifying that I, Arthur, was to rule Britain somewhat more efficiently, without the rare bug that caused saves to fail under System 6 while running over an AppleTalk network.

Side One:

Side Two:

This is a bit more random than my usual, but I needed a broad range of songs for these tapes, as I was listening to them over, and over, and over while writing my thesis in the dead of night. The other one is even weirder, but that'll wait for another weekend.

Posted at 12:44 AM | link | follow-ups | 5 comments

Thursday, May 01, 2003

Living in the Future, Aleph-Nought in a Series

I was teaching a lab today, watching a dozen and a half pre-meds using spring-loaded projectile launchers to fire steel ball bearings around the room, when I noticed a Chemistry professor ducking into the stockroom at the back of the classroom. I wandered over to see what he was after, and found him pawing through the shelves of E&M demo gear with another member of the physics faculty.

"What're you looking for?" I asked, as if I have any idea where things are in the stockroom after only two years here.

"I've got some students cooking up a batch of high-Tc superconductor in lab, and I just realized we don't have any of those neodymium magnets to levitate. Do you guys have any we can borrow?"

Every now and then, I get smacked in the head with the fact that, flying cars or no, we're living in the future. I remember when this stuff first burst on the scene, and won a Nobel Prize. Now it's an undergraduate lab.

Posted at 11:11 PM | link | follow-ups | 1 comment

Wednesday, April 30, 2003

Cartesian Theater of Arrogance

I just got back from a talk on campus by Daniel Dennett, author of many a book on the issue of consciousness. Titled "Explaining the 'Magic' of Consciousness," the talk purported to be about, well, consciousness-- how it is that we know things, and perceive ourselves knowing things. Dennett, as near as I can tell (I've heard of, but haven't read, his books on the subject) is of the opinion that it's all mechanical-- he repeatedly described the phenomenon of consciousness as a "bag of tricks" that the brain plays as different non-conscious modules pass information back and forth to each other.

Unfortunately, the talk was that peculiar subspecies of public lecture in which it is assumed that the audience (composed of people associated in some way with a fairly highly regarded liberal arts college) is a pack of idiots. Aside from a quick sketch of two different models of how deja vu might work, the talk was notable for its utter lack of anything resembling a model of how consciousness works. Sadly, it was not without frequent assertions of the existence of such a model-- a model developed by Dennett, and infinitely superior to any of the rival models (also not described) offered by his competitors in cognitive science and analytical philosophy.

Dennett opened the talk by describing the famous "Indian Rope Trick," an incredibly baroque bit of magic which, as far as anyone can tell, has never actually been performed. He went on to describe a number of possible ways to make people think that you had performed the trick, including drugging or hypnotizing the audience, generating special-effects video of the trick, and bribing a media figure to report that you had performed the trick. He used this as an analogy for what the brain does-- by using some sort of cheap trick to create the illusion that you are, in fact, conscious, the brain tricks you into believing that, well, you are conscious.

Of course, the analogy could apply to lots of other things as well, most of them having to do with politics. Moreover, in a slightly revised form, it could apply to his talk, as well:

Step 1) State that you will explain the "magic" of consciousness, and assemble an audience.

Step 2) Assert repeatedly that you have, in fact, explained the "magic" of consciousness, throwing in just enough jargon ("Cartesian theater" being the most common bit) to sound really smart.

Step 3) Send the audience home, believing that you have explained the "magic" of consciousness.

What a deeply frustrating lecture that was. The handful of optical illusions he showed were kind of cool, and I liked the explanation of the card trick "The Tuned Deck," which may be the greatest deception ever accomplished with a definite article (alas, Googling failed to turn up the story). But an explanation of consciousness it was not.

I half feel like I ought to look up one of his books, just to see if he actually does lay out a real model somewhere. Then again, his constant tone of self-promotion ("I wrote a little article that...," "In my book, I explain...," "He found this, which is just what I've been saying all along...") really set my teeth on edge, so these probably go in the "life is just too short" category. Anyone with a vastly more positive impression of his work is welcome to chime in in the comments and try to convince me to assign them a higher priority, though...

Posted at 10:39 PM | link | follow-ups | 8 comments

Tuesday, April 29, 2003

Typical Data Massage

Some years back, there was an ad in Physics Today that featured a picture of two white-coated scientists holding up graphs of identical data (a sort of broad peak-with-a-lump-on-one-side). One of the graphs showed the data being fit by two gaussian-type peaks (the classic "bell curve"), while the other had three lines, with a myriad of other parameters marked off. The caption read "Dr. Smith found two peaks. Dr. Jones found three. Guess who got the grant?"

Someone clipped this out and tacked it up in the coffee room of my old group at NIST. We found it hilarious, as our answer to the question would've been "Dr. Smith." The broadly grinning Dr. Jones, who had used the fitting program being advertised, was apparently using an eleven-parameter fit (three peaks with height, width, and position, plus a flat background, plus a sloping background), and by the time you get to eleven fitting parameters for such a simple set of data, none of us would be willing to trust those numbers. Dr. Smith would've gotten the grant, had it been ours to give, because it looked like Dr. Jones was just making stuff up.

I had occasion to use that very software package recently (a colleague here bought it), as one of my students was working on fitting some spectra, and I was boggled by just how accurate that ad turned out to be. When we fed in raw data showing a half-dozen clear gaussian peaks, the program returned a fit with more than thirty peaks-- a separate gaussian for each and every silly little noise wiggle in the data. The fit looked beautiful-- it hit every single point in the dataset-- but was utter garbage. The ad had led me to expect a program that would be powerful but untrustworthy, and using it confirmed that impression.
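
For the curious, the same pathology is easy to reproduce with any least-squares fitter-- here's a minimal Python sketch, using polynomials rather than gaussian peaks, and made-up data (none of this is from the actual package in the ad). Piling on parameters always drives the residual down, until the "fit" passes through every noise wiggle exactly:

```python
import numpy as np

rng = np.random.default_rng(42)

# Eight noisy samples of a simple underlying curve.
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(8)

def fit_residual(degree):
    """Sum of squared residuals for a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.sum((np.polyval(coeffs, x) - y) ** 2))

# More parameters always reduce the residual...
low, high = fit_residual(3), fit_residual(7)
assert high <= low

# ...and with as many parameters as data points, the fit hits every
# point exactly-- noise and all-- which is the polynomial version of
# the thirty-peak garbage fit described above.
assert high < 1e-10
```

The residual tells you how well you hit the points, not whether the fit means anything-- which is the whole problem with Dr. Jones's three peaks.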

Of course, we wound up using it anyway, but did a little smoothing of the data first-- averaging groups of three or four neighboring points to remove a little of the noise that was messing up the fit routine. There's probably also a way to restrict the initial fit parameters in a way that would eliminate the spurious peaks from the fit, but smoothing the data was the easier option, and something that we would've done anyway. The student, not surprisingly, was a little uneasy about this, but the minimal smoothing we did is more or less standard practice.
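
The smoothing in question is nothing fancy-- just a short moving average. A quick numpy sketch (the window size and fake data are my own, for illustration):

```python
import numpy as np

def smooth(data, window=3):
    """Replace each point with the average of itself and its neighbors.

    Uses a simple boxcar (moving-average) kernel; the ends are trimmed
    rather than padded, so the output is slightly shorter than the input.
    """
    kernel = np.ones(window) / window
    return np.convolve(data, kernel, mode="valid")

rng = np.random.default_rng(0)
noisy = 5.0 + 0.5 * rng.standard_normal(1000)

smoothed = smooth(noisy, window=4)

# Averaging four neighbors cuts uncorrelated noise by about a factor
# of two (sqrt(4)), at the cost of slightly blurring any real features.
assert smoothed.std() < noisy.std()
```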

Figuring out just where to draw the line on how much you can massage your data before it becomes worthless is one of the hardest parts of experimental science training. Data analysis and presentation are more art than science, and a Black Art at that. A running joke in the sciences is to note that the phrase "typical data are shown in Fig. 1" is code for "Fig. 1 presents the very best data we ever got. Ever." You only ever show the best data you end up with, and most of what goes into the data analysis usually ends up being substantially uglier than what gets printed in the pretty pictures.

But there are limits to how much data massaging you can reasonably do to a sample-- they're fuzzy and ill-defined limits, and exist in a sort of "I know it when I see it" state, but there are limits. Partly, this is a matter of the destructive effects data massaging always has on the actual information content of a data set-- when you smooth peaks to remove noise, you also make tiny changes to the peak height and width-- too much smoothing, and you obscure anything you might've hoped to learn from those values. But this is also a matter of honesty-- hell, Leo, I ain't afraid to say it, it's a question of ethics. It's incredibly easy to lie with statistics, or even just mislead people with heavily processed data. Publication is all about selling your results, but it's important not to over-sell them.

The ultimate decision on what to present is always something of a judgement call, but there are a few basic principles that I always try to adhere to:

1) Always Present Raw Data. You can do a little smoothing, or some extra averaging, and you can cherry-pick out the very best data set to put on your slides, or in the paper, but you should always show at least one figure of raw data. If your analysis starts with time-series data, you should show the time series. If you start with two-dimensional images, you should show one of the pictures.

Under no circumstances is it ever acceptable to produce a paper or give a talk in which only fits to data are shown (hey to John Lott). This is not to say that everybody in physics, or even my corner of physics, holds to this-- there's a guy in atomic physics who is locally infamous for generating three-dimensional plots of image data through a process that essentially amounts to passing off theory curves as experimental data (I won't name names, but Nathan may know who I mean)-- but it's one of the few rules that's more observed than breached.

2) Avoid Log Plots. Sometimes there's no way around putting a log-log or semi-log plot into a talk-- if your data legitimately span several orders of magnitude, a logarithmic plot (where ticks on the axis mark orders of magnitude (0.1, 1, 10, 100), not single digits (0, 1, 2, 3)) is the only way to go. Too often, though, log plots can be a tool of deception-- they compress data in a way that makes the noise seem smaller, and very few people have a good enough feel for what's going on with a log plot to be able to see through this. If you read a paper, and you see a semi-log plot where the scale ranges from 0.1 to 5, odds are it's being used to obscure shaky data.
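
The compression is easy to quantify: the same absolute noise band spans far less of a log axis at the top of the scale than at the bottom. A quick sketch, with numbers made up purely for illustration:

```python
import math

# The same +/- 0.05 of absolute noise, on a point at 0.1 and a point at 100.
noise = 0.05

# Vertical extent of each noise band on a log10 axis:
span_low = math.log10(0.1 + noise) - math.log10(0.1 - noise)
span_high = math.log10(100 + noise) - math.log10(100 - noise)

# On the log axis, the identical noise band is over a thousand times
# thinner at the top of the scale-- which is why shaky data can look
# clean on a semi-log plot.
assert span_low / span_high > 1000
```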

3) Linear Scales Include Zero. Again, there are times when you have to break this rule, but whenever possible, a linear scale should include zero. One of the easiest ways to inflate the size of an effect is to plot it on a graph whose axes run from 1000 to 1002. The data will swoop dramatically from one extreme of the scale to the other, impressing the casual reader to no end, but the actual effect is completely insignificant.

If you're stuck with data that really do range between 1000 and 1002, and the effect is, in fact, a significant one, you're better off re-casting the graph in terms of a difference from some point, and putting zero on scale, or normalizing the data in some way to make things clearer. If you see a graph that seems to show a dramatic change in something, always check the axes to make sure that it's not just a tiny wiggle atop a huge and unchanging background.
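
A quick numerical sketch of the truncated-axis trick, with illustrative numbers:

```python
# Data that "swoops" dramatically on axes running from 1000 to 1002.
data = [1000.0, 1000.3, 1001.1, 1001.8, 1002.0]

# On the truncated axes, the trace spans the full height of the plot.
# But the actual fractional effect is tiny:
effect = (max(data) - min(data)) / min(data)
assert effect < 0.003  # two parts in a thousand

# Re-cast as a difference from the baseline, with zero on scale,
# the reader sees the real size of the change directly:
deltas = [x - data[0] for x in data]
assert deltas[0] == 0.0 and max(deltas) == 2.0
```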

4) Binning Beats Smoothing. If you've got a long file full of noisy data, it's very tempting to just run the standard "smooth" algorithms on it-- replacing each point with the average of its value, and the values of a couple of points on either side. This is trivial to do on a computer, and often does a dramatic job of improving the look of a graph.

Used too much, though, this can easily become a tool of deception-- you end up presenting a graph with 100 points all on a smooth curve, giving the reader the impression that each and every point of data fell exactly on that curve from the very start. In fact, though, each of those points is really an average over some larger range of data. You're saying, in effect, that the value at 1.0003 seconds was x, when in actuality, the point you've plotted at 1.0003 seconds is the average of all the points between 1.0000 and 1.0006, while the point at 1.0004 is an average from 1.0001 to 1.0007, and so on.

For small amounts of smoothing, it's not worth the added hassle to clear this up, but if you get above about five points smoothed (the point you start with, and two to either side), it's better to start binning the data. Instead of plotting 1000 points, where each point is the average of five values, plot 200, and leave out the deceptive points in between.
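
Binning is just as easy to do on a computer as smoothing-- average non-overlapping groups of points instead of overlapping ones. A minimal sketch (the data here are made up):

```python
import numpy as np

def bin_data(data, bin_size=5):
    """Average non-overlapping groups of bin_size points.

    Unlike a moving average, each output point summarizes a distinct
    chunk of the data, so no deceptive in-between points get plotted.
    Leftover points that don't fill a complete bin are dropped.
    """
    n = (len(data) // bin_size) * bin_size
    return np.asarray(data)[:n].reshape(-1, bin_size).mean(axis=1)

# 1000 noisy points, five to a bin, leaves 200 honest points to plot.
rng = np.random.default_rng(1)
noisy = np.linspace(0.0, 1.0, 1000) + 0.1 * rng.standard_normal(1000)
binned = bin_data(noisy, bin_size=5)

assert len(binned) == 200
```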

5) Beware of "Feynman Points". The name is a reference to a comment attributed to Richard Feynman, saying basically that he would never trust the last two points on an experimental graph, because if the people taking data could've gone beyond that, they would've.

This isn't always true-- I've published a couple of papers where the data cuts off basically because we got bored, and had already made our point-- but in general, it's a good rule to keep in mind. The last couple of points on a graph are usually where the apparatus is working at its absolute limits, and that's where you're most likely to get flaky results. If you're presenting data, you should always avoid trying to draw grand conclusions from the last couple of data points. Yeah, you know that it's a good experiment, and the apparatus works, and you're doing everything right, but you also know where you're pushing the limits.

And if you see somebody presenting data and drawing dramatic conclusions based on the last couple of data points, take those conclusions with a bucket of salt.

Posted at 10:42 AM | link | follow-ups | 1 comment

Monday, April 28, 2003

Cue Steve Earle

The other problem I have with Kevin Drum's suggestion for teaching history (scroll down for the first problem) is that I think it might fall even further than we already do into the trap of viewing history as a determining factor in everything.

This is something that nags at me whenever talking heads start going on about the historical causes of modern events. It's remarkably easy to fall into saying "Well, the people in that part of the world have just been at each other's throats for centuries-- what're you going to do?" This is particularly popular when discussing the history of the Balkans, or the Middle East, and another popular variant involves attributing Chinese Communist Party paranoia and xenophobia to Imperial policies of a millennium or so ago.

While there's often a glimmer of a point to these statements, I think they're wrong more than they're right. History isn't deterministic, and nothing shows this more clearly than the history of Western Europe. By the logic applied to the Middle East, one would expect France and the UK to be at each other's throats. After all, the history of the two nations is one unending stream of bloody war, from the Norman Conquest down to the day of Napoleon. And yet, they peacefully coexist today, and are actually pretty friendly, as such things go.

Or take a look at France and Germany-- they fought on opposite sides of the two bloodiest wars in human history, and have their share of nasty squabbles going back centuries before that. By Balkan logic, the Maginot Line fortifications should be updated yearly, and yet, France and Germany are at least as close as France and the UK. They joined together to torpedo Iraq resolutions in the UN, after all.

History isn't deterministic. The fact that two groups of people have been at war for centuries doesn't mean they'll always be at war.

History isn't even necessarily relevant. Throw a rock in the air these days, and you'll hit a blogger who can tell you how the roots of strife in the Middle East all trace back to the dissolution of the Ottoman Empire, or maybe even the Crusades. But, really, outside of a few comfortable Westerners pontificating about the grand scheme of things, and the odd messianic whack job, these really aren't the factors driving the conflict. What keeps the region simmering is the fact that it's divided between repressive medieval theocracies, repressive but geopolitically convenient secular despots, and Israel. Were it not for the propaganda skills of most of the theocrats and despots at using history to redirect public rage away from themselves, I doubt we'd be talking about the Ottomans, let alone the Crusades.

That's the problem I see with history-in-reverse. In order to be comprehensive, you'd need to trace the links to current events back as far as possible. The Middle East gets you to the dissolution of the Ottomans, which gets you to the rise of the Ottomans and the fall of Byzantium, which gets you to the Crusades, and all the way back to Mohammed and the rise of Islam. Which can't help but reinforce the idea that there's some sort of direct line between Pope Urban II and the most recent Palestinian to stuff his undershorts with Semtex and go to the mall.

But really, whatever connection exists between the Crusades and the current disputes over Israel is a tenuous one at best. While it may feel very illuminating to trace the origins of Hamas back into the mists of history, it's doubtful that the actual grievances of suicide bombers stretch any farther back than 1949, and most likely, they don't go back much past the last rocket attack on the West Bank or Gaza. Tracing the origins of modern problems back into history leads rather too easily into burying the causes in history, which is foolish.

We can't do anything about history, but we can hope to do something about the immediate concerns of the Palestinians, the Kurds, the Iraqi Shiites, and whatever other groups you might care to name. And if we can do that, there's hope that the Ottomans and the Crusades might one day be as remote and non-threatening as the Norman Conquest and the Franco-Prussian War.

Posted at 10:26 PM | link | follow-ups | no comments

You're Not Feynman

Kevin "Calpundit" Drum has a neat suggestion for teaching history:

It's hard for kids to get interested in century old debates without knowing all the context around them, but they might very well be interested in current day events. So why not start now and explain the events that got us here? War on terrorism? Sure, let's teach it, and that leads us backward to a discussion of how the current state of affairs is the successor to the bipolar world that came apart in 1989. And that leads back to the Cold War, and that leads back to World War II, etc.

In other words, invert cause and effect. Try to get them wondering about the causes of things they already know about, and then use this curiosity to lead them inexorably backward through history.

It's an intriguing idea, and I could imagine it working well for some people. It's essentially that impulse, after all, that led to bookstore display tables chock full of books on the history of Afghanistan.

I'm not entirely convinced (as I said in Kevin's comments section) that it would really work, though, in part because of what I think of as the "Feynman Lectures problem."

For those not hip enough to recognize the reference, the Feynman Lectures on Physics are a transcription and extension of the lectures Richard Feynman gave in an intro physics course at Caltech many years ago. Feynman is famous as an explainer of physics, and these lectures are part of the reason why-- they're wonderfully readable, and contain lots of fascinating little digressions about topics that are as much philosophy as physics.

Of course, legend also has it that by the end of the course, there were more tenured professors sitting in on the lectures than there were students in attendance. The Lectures are wonderfully illuminating if you already know a bit about the subject, but somewhat deceptive if you don't. As one of my professors in grad school put it, "You read Feynman, and you say, 'Yes! I understand this! I'm doing physics!' Then you try to do a problem, and you realize that, well, you're not Feynman."

I worry that a similar problem might afflict Kevin's imagined reverse-history course. The approach he envisions sounds very appealing to me, but then I enjoy this sort of thing, and already know a bit about the subject. The technique Kevin imagines would be great, as it would amount to basically just filling in the gaps in stuff I already know, and making connections between topics that seemed unrelated on my first encounter with them.

I'm not so sure it would work all that well on people who don't already have a hazy mental picture of what history looks like, though. Trying to learn wholly new material in bulk, and also keep track of the chain of connections leading back up to the present, might be a bit too much for intro students to handle. A little context is good, but too much context can just swamp a novice. I could see this working as a sort of a "Capstone" course, wrapping up and tying together a sequence of earlier history courses, and demonstrating the relevance of the material, but I'm not sure I believe that it would really work to improve the opinions most students have of the subject.

In a way, this is the flip side of my earlier comments about narrative in teaching. It's a recurring concern I have when I put lectures together (and a complaint that crops up on course comment sheets from time to time). I worry that an order of topics that seems intuitive and interesting to me won't actually get through to the students very well-- that, in the end, I end up burying them in context and asides, and leaving them unable to solve problems.

Of course, this then comes back to one of the philosophical crises of college teaching: whether it's better to try to spark the interest of those intro students who might go on to major in a subject, and risk confusing those who were never potential majors; or whether you should just pound away at what they need to know to get through the course, and trust that the interested will find it interesting enough to continue, while those who are taking the class just to get a requirement out of the way will go away happy. It's a question I've struggled with a lot, and I've yet to strike a really satisfactory balance between the two.

Posted at 9:49 PM | link | follow-ups | no comments

Sunday, April 27, 2003

Cat Vacuuming

I've shifted the rest of my Web material from to (I hope), and updated the blog template to reflect that fact. I also added a link to my stories about living in Japan, as long as I'm merging all my web content together. Feel free to poke around, and let me know if you see any broken links and whatnot.

Also, if you link to any of my other pages, please update your links (and check through the new site, as the directory structure has been changed). The site will be going away Real Soon Now.

Posted at 9:38 AM | link | follow-ups | no comments

Δx Δp ≥ h / 4π

My stuff
What's with the name?
Who is this clown?
Does he know what he's talking about?
Archived Posts
Index of Physics Posts
RSS, version 0.91
The Library of Babel
Japan Stories

ΔE Δt ≥ h / 4π

Other People's Stuff

AKMA's Random Thoughts
Arcane Gazebo
Arts and Letters Daily
Boing Boing
Chronicles of Dr. Crazy
Confessions of a Community College Dean
Cosmic Variance
Crooked Timber
Brad DeLong
Diary de la Vex
Drink at Work
Easily Distracted
Electron Blue
John Fleck
Grim Amusements
David Harris's Science and Literature
Hellblazer
In the Pipeline
Invisible Adjunct
Izzle Pfaff
Knowing and Doing
The Last Nail
Learning Curves
The Little Professor
Making Light
Malice Aforethought
Chris C. Mooney
Musical Perceptions
My Heart's in Accra
Michael Nielsen
Not Even Wrong
Notional Slurry
Off the Kuff
One Man's Opinion
Orange Quark
The Panda's Thumb
Perverse Access Memory
Political Animal
The Poor Man
Preposterous Universe
Pub Sociology
Quantum Pontiff
Real Climate
The Reality-Based Community
SciTech Daily
Sensei and Sensibility
Talking Points Memo
Through the Looking Glass
Unmistakable Marks
Unqualified Offerings
View From the Corner of the Room
What's New
Whiskey Bar
Wolverine Tom
Word Munger
Yes, YelloCello
Matthew Yglesias

Book Stuff

Book Slut
Neil Gaiman
The Humblest Blog on the Net
Pam Korda
Outside of a Dog
Reading Notes
Seven Things Lately
The Tufted Shoot
Virtual Marginalia
Weasel Words
Woodge's Book Report


ACC Hoops
College Basketball (2.0)
Dave Sez
Hoop Time 3.0
The Mid-Majority
Set Shot
Tuesday Morning Quarterback

ΔN ΔΦ ≥ 1 / 2


75 or Less Album Reviews
Rotten Tomatoes
The Onion A.V. Club

Geek Stuff

Annals of Improbable Research
Astronomy Picture of the Day
Britney Spears's Guide to Semiconductor Physics
The Comic Book Periodic Table
MC Hawking's Crib
The Museum of Unworkable Devices
Myths and Mysteries of Science
The Onion
Physics 2000
Sluggy Freelance
Web Elements
Physics Central (APS)
This Week's Finds in Mathematical Physics

Useful Stuff

Web Design Group

While it is my fervent hope that my employers agree with me about the laws of physics, all opinions expressed here are mine, and mine alone. Don't hold my politics against them.

Weblog posts are copyright 2003 by Chad Orzel, but may be copied and distributed (and linked to) freely, with the correct attribution. But you knew that already.


Powered by Blogger Pro and BlogKomm.

Steelypips main page.