Compare and Contrast, Charity Edition
Tuesday night, Jermaine O'Neal of the Indiana Pacers, desperate for some image resuscitation after slugging a fan in the big brawl in Detroit, pledged to donate $1,000 to tsunami relief efforts for every point he scored. He scored 32, but after the game bumped his contribution to $55,000, to match his season high of 55 points.
Never a group to let PR opportunities pass, several other NBA players repeated O'Neal's $1,000 pledge last night. Tracy McGrady, Kobe Bryant, Bob Sura, Pau Gasol, Mike Miller and Jalen Rose each pledged $1,000. Bryant topped the scoring list with 27 points. Jalen Rose scored 21, but bumped his contribution to $44,000, to match his career high. To the best of my knowledge none of them have bumped up their contributions (if I'm wrong, or that changes, post a link in the comments). The NBA and the NBA Players Association are each kicking in $500,000.
In another sport, Major League Baseball and the MLB Players Association are also donating a combined $1,000,000. The Yankees as an organization are donating $1,000,000 from the proceeds of their opening day game.
Hideki Matsui, a single player on the Yankees, has donated $480,000 all by himself. Quoting from the ESPN article:
Outfielder Hideki Matsui donated $480,000 to Fumio Hirata, head of the local chapter of the Japanese Red Cross Society and mayor of Neagari -- Matsui's hometown.
"Watching the news every day, I thought that this must be the worst disaster in history. I'd like to do what little I can to help," Hirata said Matsui told him.
The current athletic charity champion is Michael Schumacher, the Formula 1 driver, who's ponying up $10 million (out of an estimated income of $80 million-- who knew?).
The standard excuse for the US's piss-poor foreign aid contributions is that we make up for the lack of government funds through generous private charitable contributions. We report, you decide.
One Size Fits None
I realize that education policy is probably the only subject about which I have more trouble keeping a cool head than religion, but it's a Saturday, so the number of comments should be relatively small. And I can't really let this boggling Kevin Drum post slide by without saying anything.
Kevin's thinking about teacher compensation again, and wondering why teachers can't just be evaluated by their bosses, like people in white-collar industry. He thinks he's found one reason: there aren't enough school managers to do the necessary observation of teachers teaching.
Compare that to a typical elementary school, in which 20 or 30 teachers are managed by a single principal who sees them work at most for a few hours a year. How can a principal make any kind of reasonable evaluation based on that level of observation?
It's not possible, and that brings me to my observation: for all the talk about the efficiency of the private sector, I can't think of a private sector company that would allow itself to be as undermanaged as a typical public school. If my local elementary school were a part of IBM, it would probably have two or three first line managers, each managing a couple of grade levels, who would then report to the principal. These managers would spend their time actually managing: observing teachers, dealing with parents and the district office, mentoring new teachers, and evaluating performance.
I have a hard time coming up with a single response to this, because there are so many misconceptions in there. First of all, the public school hierarchy is not generally as flat as Kevin thinks-- even a relatively small school like the one I went to (~150 per graduating class) has at least a couple of vice-principals to handle some of the duties (usually discipline issues, from what I can tell). And even in small schools, there's a good deal of coordination among teachers at a given grade level or subject area, while larger schools are often broken into departments. It's not just 30 teachers with only one principal above them on the organizational chart.
More than that, though, I'm sort of at a loss as to what the extra managers would do to justify their (presumably higher) salaries year-round. I mean, yeah, they could do more classroom observing, but that's not a full-time job. And there's not actually that much "dealing with parents" that can be done by someone who isn't the classroom teacher for the children in question. Adding another layer doesn't seem like it would be all that helpful.
Really, there's just not that much for extra managers to do, though I'm sure if you created the positions, they'd find things to do to fill their time. In fact, I'd tend to say that a lot of the problems we have in public school education today are the direct result of people in education administration who don't have enough to do.
So there's the paradox: I don't think teachers are somehow immune from needing supervision, any more than any other white collar worker. But there's precious little of it available, and it would cost a fortune to provide it. Private sector firms seem to think that reasonable levels of management make them better companies, but public schools don't. Why?
Why do private sector firms think that? Good question. I've often wondered myself, but I suspect it has something to do with the fact that the people deciding how many managers there will be are themselves managers.
Well, ok, I suppose the question is really "why don't public schools have more layers of management, like private industry does?" I really don't have a less flippant answer than "Education is not industry."
This is a hard topic to speak sensibly about, because I've never worked in the business world, and never hope to, so I'm not even clear on what those extra managers are doing in the private sector. My impression, though, is that the extra levels of management are needed to maintain coordination throughout the production of whatever is being produced.
If you're running a software company, for example, you need to make sure that whatever chunk of the product you're working on will work together with the chunks being handled by other people, that it will be ready when it's supposed to be ready, and that when it's all put together, it will do what the customer needs it to do. While that doesn't necessarily require hourly updates, there's enough to keep track of on a daily basis that it's more efficient to have someone whose whole job is keeping track of where everything is. If you're doing something complicated enough, that turns into several someones, with other people keeping track of where they are.
Education doesn't work the same way, but if you want to make like Roger Waters and insist on a terrible analogy between education and industry, it doesn't work on the same time scale. The "product" in the case of education is, well, educated children, and the delivery date is years off. The various grade levels work on the "product" sequentially, and in well-defined units. There's no need to monitor the daily progress of the fifth-grade class and coordinate with the sixth-grade teachers. The fifth graders won't hit the sixth grade until next September, and the ones who haven't progressed far enough will have another go at the fifth grade. If there's some sort of large-scale failure requiring the entire sixth grade to re-adjust their curriculum, that can be handled during the summer months.
The "customers" in this bad model are either the parents or society as a whole. If you take the view of parents as customers, it's not necessary (or, at least, it shouldn't be necessary) for there to be a manager to coordinate between the customers and the production team, because the "customers" see the "product" every day. Put another way, if you want to know how your kids are doing in school, ask them, they're right there. Handling the exceptions is not a full-time job-- it can be done with a couple of meetings a year in most cases, and weekly or monthly contact in others.
If you want to consider society as a whole as the "customer," they really don't care about daily or even weekly updates on specific kids. Once a year is about good enough.
So, why is your local public school not run like IBM? Because it's not in the same business as IBM. Why should it be run like IBM?
The idea that every activity under the sun should be organized and evaluated in the same manner as major American corporations is really one of the most corrosive bits of foolishness to come down the pike in quite some time. Unless there's a good reason for it (and "merit pay would be much simpler" is a terrible reason), let businesses run themselves as businesses, and schools run themselves as schools. Schools aren't businesses, and trying to make them run like businesses will only get you lousy schools.
Ingenious Solutions to Unbelievable Problems
Along the same lines as the previous post, another fascinating thing about the LRT2004 workshop was getting to see the sort of weird technical problems that people face in the ultra-low radioactivity business. I'm not talking about things like power outages or lab floods, which are just stupid failures of infrastructure, but real problems associated with the work they're trying to do.
Whenever you start looking into the really gory details of a new experiment, you always discover that people have found really ingenious solutions to problems you never even realized existed. In my own work, I've recently learned that the ATTA group at Argonne dealt with the problem of scattered light by painting the inside of their vacuum chamber black. I was sort of dimly aware that scattered light had to be a major issue (they're trying to unambiguously detect single atom fluorescence, after all), but it never even occurred to me that it would be possible to get vacuum-compatible black paint. It turns out that NASA has needed to paint a number of satellites for one reason or another, so there's a company in Alabama selling vacuum-compatible paint (if the link doesn't go directly to the product page, be warned that they have the world's most annoying Flash intro). Who knew?
In the case of the LRT2004 workshop, the most fascinating solution I'd never thought anyone would ever need was "ancient lead" (warning: the English is a little dodgy, but that's the best explanation I found in a quick Google search).
The scientific question in this case has to do with low-level counting experiments, mostly for screening materials to be used in much larger instruments. When people do this, they're attempting to measure radioactive decay rates of something like a few microbecquerels per kilogram. That translates to something like ten or a hundred radioactive decays per year in each kilogram of material. That's a million times less radioactive than fresh outdoor air, and better than ten million times less radioactive than the radon-laden air in your basement.
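If you want to check those numbers yourself, the unit conversion is a one-liner: a becquerel is one decay per second, so a back-of-the-envelope sketch looks like this (the specific activity values are the round numbers from the text, not measured data):

```python
# Convert a specific activity in microbecquerels per kilogram into
# expected decays per year, to sanity-check the figures quoted above.
# 1 Bq = 1 decay per second.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

def decays_per_year(activity_ubq_per_kg, mass_kg=1.0):
    """Expected number of decays per year for a sample of the given mass."""
    activity_bq = activity_ubq_per_kg * 1e-6  # decays per second
    return activity_bq * mass_kg * SECONDS_PER_YEAR

# 1 uBq/kg works out to about 32 decays per year per kilogram, so
# "a few" uBq/kg lands in the tens-to-hundreds range quoted above.
print(decays_per_year(1.0))  # ~31.6 decays/year
print(decays_per_year(3.0))  # ~95 decays/year
```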
If you're going to try to measure radioactivity at this ridiculously low level, you need to do something to ensure that you're not just picking up garbage counts from the environment. That is, you need to be sure that when your detector is looking at nothing at all, it picks up less than ten counts per year. You can do a lot with active methods, setting up concentric shells of detectors to identify particles coming in from outside and remove those from your signal. For active screening to even have a chance to work, though, you need to do something to knock down the background level, a lot.
That's why a lot of these labs are built under huge mountains, or down in the bottom of mines, where they can take advantage of the shielding effect of a kilometer or so of solid rock. Even that's not enough, though, as there's some radioactivity present in mines and tunnels (that pesky radon again-- there were a couple of talks about the lengths the Super-K people have to go to to get radon out of the air in their facility), and some of the gadgets you have in the lab will undoubtedly contain some trace radioactive elements.
And that's where the lead comes in. Even under a mountain, people have to hide their sensitive detectors inside big piles of lead bricks. But if you want really good sensitivity, not just any lead will do. You want old lead, the older the better.
You see, there's an isotope of lead, lead-210, that's radioactive, with a half-life of about 22 years. All modern lead contains at least some lead-210, which is produced as part of the uranium decay chain, and gets all over everything because it comes after radon in the chain, and radon can spread pretty widely before it decays.
You can get rid of the lead-210, though, if you just wait long enough. After a few hundred years, the lead-210 is essentially all gone. Which is why ancient lead is much in demand in the ultra-low radioactivity community. And they're not kidding when they say "ancient," here-- the lead in question comes from ingots that were used as ballast in Roman galleys that sank two thousand years ago.
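The "wait long enough" claim is easy to quantify with the standard exponential-decay law, using the 22-year half-life quoted above:

```python
# Fraction of lead-210 remaining after a given time, from the usual
# decay law N(t) = N0 * 0.5**(t / t_half), with the ~22-year half-life.

T_HALF = 22.0  # years, half-life of lead-210

def fraction_remaining(years):
    return 0.5 ** (years / T_HALF)

for t in (22, 100, 300, 2000):
    print(f"{t:5d} years: {fraction_remaining(t):.1e} of the Pb-210 left")
```

After 300 years the lead-210 is down to less than one part in ten thousand, and 2000-year-old Roman ballast is down by a factor of roughly 10^27: for all practical purposes, zero.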
I don't think any other single thing encapsulates the level of ingenuity and single-minded dedication involved in these experiments quite as well as that: in order to achieve the kind of sensitivity they want, they have to dredge up lead bricks from the bottom of the ocean, and use them to shield their detectors.
Scale and Ambition
There were a number of little things that made it really clear to me that the LRT2004 workshop was drawing from a radically different population than I'm used to, but the one that really sums it up to me is that they didn't have long lists of names on their title slides (PowerPoint slides and PDF files from all speakers are available on the program page, though many of the files are gigantic).
Experimental physics is necessarily a collaborative enterprise, and pretty much every talk at an atomic physics meeting will feature a list of people to be acknowledged, either on the first slide, or the last. It's rare to see a talk that doesn't name five or six people, and most of the abstracts in a random session at DAMOP will include multiple authors. My talk for the LRT meeting lists something like a dozen people on the first slide (my collaborator at Yale, a couple of local colleagues who've been helpful, and a long list of undergraduate research students).
None of the other talks did that, and clod that I am, it took me a little while to figure out why. It's not because they were doing solo work-- quite the opposite. It's because the collaborations are so huge that there's no way to list everyone involved. The title slides all give the name of the speaker, an institution, and a mention of which gigantic project they're with, the biggest (apparently) being SNO, Borexino, XENON, and Super-Kamiokande. Each of those collaborations probably runs to hundreds of members, and some people belong to multiple collaborations. The scale of the whole enterprise is really daunting.
Of course, when you consider the projects they're working on, it sort of makes sense. If you've got a fast connection and time to kill, it's worth taking a look at some of the talks, particularly the overviews in the first session. These are gigantic detectors built under huge mountains (one European lab is called Gran Sasso, which my sketchy knowledge of Romance languages renders as "Big Rock"), and in the case of SNO, in a working mine. We're talking about people who can say a sentence like "Of course, there are significant technical challenges to maintaining a clean room environment underground, with intermittent blasting going on in other parts of the mine," and it's not meant as a joke.
The other really humbling thing about these projects is the time scale. In many cases, these groups are pouring huge amounts of time and energy and money into detectors that aren't expected to ever detect anything. They're million-dollar prototypes-- in many cases, the sensitivity of the devices now in operation isn't even close to good enough to actually discover any new physics. Their main purpose is really just to learn enough about building these sorts of detectors to be able to build the $10 million detector that they have planned. And in some cases, even those aren't expected to work.
The sort of dedication and long-term thinking that's required for this stuff is just mind-boggling. As I type this, I don't have a very clear plan for what I'm doing for dinner tomorrow night, and yet these groups have a research plan stretching five and ten years out that's just laying the groundwork for later experiments. I can't really begin to conceive of what it must be like to do that sort of thing, but I'm sort of glad there are people who can.
Of course, sometimes they do get a little carried away-- the last Sunday talk presented a plan which involved putting 200 megacuries of tritium at the center of a novel sort of detector, to look for neutrino oscillations. To put this in perspective, one curie is, roughly speaking, the level of radioactivity required to give a Polish physicist terminal cancer (not really), and you won't find two hundred million times that much tritium outside of a nuclear weapons lab. And good luck getting them to give it to you-- somebody pointed out that CERN (I think) has something like 4 MCi of tritium, and that required a special dispensation from the IAEA and various national governments.
It's a completely daft idea (at least, that's how it looks to me). But then, you almost have to admire the kind of ambition that can conceive of a project on that scale.
Sensitive Dependence on Initial Conditions
I'm feeling guilty about the lack of science stuff here in recent weeks, so there'll be a series of short pieces about different aspects of the workshop I went to last month later this week. I had two lab sections to do before lunch today, though, so I'm too brain-fried to really deal with science.
So, instead, I'll talk about something that doesn't actually require any brain power: Why college football's "championship" system sucks.
Auburn beat Virginia Tech last night to finish the season at 13-0. Despite having beaten every team they played, they were shut out of the official BCS championship game (tonight, between USC and Oklahoma). The BCS rankings have Auburn at #3, which means they're completely out of luck as far as a title shot goes.
If you follow sports media at all, you've undoubtedly heard all sorts of explanations for this, mostly centering on their weak non-conference schedule. The real explanation is simple, though: Auburn lost any chance at the national title in August, when the pre-season polls came out. Both the AP and the coaches' poll had USC at #1 and Oklahoma at #2, while Auburn was at #17 in the AP, and #18 in the coaches' poll. Auburn's shot at a spot in the title game was pretty much gone right there, before they ever played a game.
Those two polls are a major component of the BCS rankings, and the simple fact is, if you don't lose a game, you don't drop in the polls. There is essentially no way for any team outside the pre-season top two to earn a spot in the championship game themselves-- the only way they can make it into one of the top two spots is if one of the pre-season top two teams loses a game. USC and Oklahoma both went undefeated, and you'll note that their end-of-season rankings are identical to their pre-season rankings. It's actually kind of impressive that Auburn even managed to get as high as #3 in the polls, given how low they started, but there is nothing they could do to get themselves any higher: they need one of the top two to lose.
It has nothing to do with the schedule, or the programs involved. If Auburn had started the season at #2, and Oklahoma at #18, the Sooners would be on the outside looking in. If USC had started the season at #3 behind Auburn, Kevin Drum would be complaining about having the Trojans left out. If the top two teams don't lose, there's no way for anyone else to get in.
This is really the essence of the problem with college football: the participants in the final game of the season depend on predictions made before the season even starts. And if you know anything at all about sports, you know that pre-season rankings almost always suck. There are always teams ranked highly in the pre-season who end up disappointing, and there are always teams overlooked in the pre-season who end up doing very well. This is true in all sports, at all levels: college football, college basketball, pro football (hey to the Chargers), pro basketball, baseball, whatever.
The difference is, college football is the only major sport that allows the terrible pre-season predictions to have any influence on its "championship." Every other sport has an end-of-season tournament, with seedings based on the results of actual games, and they determine a winner on the field of play. And that's why college football is a waste of time.
I taught my two classes (two hours of loud talking with a ten-minute break is a lot of work), and they went reasonably well, though I didn't get as much covered as I would've liked.
I met with one of my advisees, and cleared a large backlog of stuff that I was prevented from doing last week by the fact that all the administrative staff at the college were on vacation. I also made an appointment to meet with my thesis student tomorrow, to get him set up for the next stage of his project.
I waited until the end of the last class, and went into the classroom (which is the same room we use for lab), and made sure all the equipment was set up for tomorrow's 9:00 lab section.
Then I left, and trudged all the way up to the parking lot at the top of the hill. Which was not where I had parked my car.
Yep. Gonna be a long term.