Better Living Through Chemistry
Of course, even if I can't think of structural reforms that can improve the lives of graduate students and post-docs, it's good to know that there are dedicated scienticians out there working on ways to improve their quality of life:
Our theory is that a simple Brita water filter can be used to turn bad vodka into good. In our case this meant turning a Vladimir™ into a Ketel One™. At $11.09 for 1.75 liters (Ketel is 11.99 for the 350 ml), Vladimir is a steal. It is, however, painful to drink, has a repugnant aftertaste, and possesses a bouquet reminiscent of rubbing alcohol. Our working theory was that these terrible qualities were caused by a lack of proper filtration, and that running our Vlad through a charcoal filter would remove some of the impurities causing these odors and flavors.
So, you know, you may have to work long hours, for lousy wages, but at least you can drink good-tasting vodka...
Department of Unsurprising Results
In response to the Jan Hendrik Schoen debacle, the American Physical Society commissioned a study of professional ethics, the results of which are written up in this month's Physics Today. They sent surveys to "undergraduates who are members of both APS and the Society of Physics Students (SPS), junior members of APS, a number of large experimental collaborations, physics department chairs, and leaders of APS divisions, forums, sections, and topical groups," asking a number of questions about various kinds of professional misconduct, and ethics instruction.
The main result of this survey shouldn't really come as a surprise to anyone, but seems to have: young physicists are concerned about career issues:
By far the highest response rate and the most extensive and heartfelt answers to the open-ended survey questions came from the junior members of APS-- that is, physicists within the first three years after getting the PhD. Clearly, issues of ethics and professional conduct find strong resonance in that group of young physicists. Here is a small sample of their many responses:
The only real answer to the ethics problem is for tenure review boards to stop rewarding the Science/Nature/PRL culture above all else.
Our scientific community promotes the search of the surface and superficiality [to the] detriment of content and deepness.
Many breaches of ethics arise from the pressure to publish . . .
The researcher . . . will be judged [by] the number of articles, and the corresponding journal names, appearing on the CV. He or she will not be judged [by] the work spent on each paper, how many backup checks were performed to confirm the results, and so on. High number of papers, in highly ranked journals, is what builds a career. . . . The recent sad events [show] that it is for many people more important to publish spectacular results than to publish true results.
The junior members' concerns over careerism and other issues are echoed again and again in response to the survey question, "What do you see as the most serious professional ethics issues which could/should be addressed by APS?"
Particular emphasis was given to the treatment of students and post-docs:
Until recently, APS ethics statements had focused mainly on issues related to publication of scholarly work, authorship, and refereeing practices. But a clear majority of the junior members responding to the survey feel that APS ethics statements should be broadened to include treatment of subordinates, especially graduate students and postdocs. Many of their open-ended responses described the unethical treatment of subordinates in research as a very serious problem:
abuse of graduate students by advisers.
slavery of graduate students. Professors threaten to not write letters of recommendation unless graduate students stay in their group to produce more data.
Too often students are treated as labor instead of [as] students and progress towards finishing [their degree] relegated to secondary importance.
Truthfully, graduate school's purpose is to provide cheap, talented labor to get science done cheaply.
Treatment of 'subordinates' is appalling-- students and postdocs are merely vehicles for publication. There are no checks on abuse, and reporting of any abuse usually results in the end of a subordinate's career-- even if the complaint is correct and justified.
Junior members expressed concerns over not giving students credit for research by leaving their names off published papers. They also wrote of supervisors imposing grueling hours on their graduate students and sometimes pressuring them to do unethical things such as overlooking data that did not conform to expectations.
(Apologies for the lengthy quotes-- I'm not sure whether this will end up behind a paywall at some point, and want to provide enough context for this to make sense.)
I feel like I ought to say something about this, but it's kind of hard to come up with anything coherent. I can't say I've personally witnessed serious grad student abuse, beyond what's inherent in the job ("Grad School: the hours suck, but at least the pay is bad!"), but I believe it exists. And I can easily believe that some things I shrug off would be deeply traumatic to other people.
In the end, the survey respondents have some valid points: the treatment of grad students and postdocs is, in general, not very good. It's way better than what's inflicted on medical students (which I think borders on criminal), but things could be much better.
And I think it's true that this traces back to the tenure system, or, rather, the tenure system combined with the tightening of the labor market. With old professors stubbornly refusing to die and/or retire, there are many more Ph.D.'s out there than there are jobs available for them, and the requirements for tenure have become more stringent as a result. It's a lousy labor market, which puts terrible stress on those lucky enough to land jobs, and some of that pressure is inevitably passed on to their subordinates. The physics community is full of stories of professors who were turned into raving lunatics by the pressure to publish results in order to make a tenure case. I'm trying not to become one, but there are days when it's tough.
The problem is, I don't know what to do about any of this. Which makes it hard to write anything useful on the topic. Hence this lame post. Suggestions and lurid stories are welcome in the comments, though.
I have a colleague who reads the Chronicle of Higher Education regularly, and who forwards articles that strike him as interesting to the entire faculty. Some weeks, this amounts to fully half of the published text of the Chronicle, so it's not really that useful a filter, but every now and then they have something fairly worthwhile.
This week, it's an article on teaching technology (limited-time free link from the email forward-- read it quickly, before it goes behind the paywall). Specifically, it deals with complaints about ineffective use of PowerPoint, Blackboard, and other educational technologies:
Alison Lesht, a senior at Connecticut College, dreaded going to her organic-chemistry classes, held in one of the college's wired classrooms.
It wasn't that the material was dense and challenging. It was because her professor "would write on the PowerPoint slides complete sentences, which she would then read," explains Ms. Lesht, who is majoring in biology and minoring in religious studies. "It didn't really add anything to the lecture. It just made everything more complicated and convoluted."
"I call it 'PowerPoint abuse,'" she says. "It's pretty widespread."
There's definitely something to this, or at least to the idea that it's perceived as a problem. I've had a colleague tell me that no junior faculty member should ever lecture with PowerPoint, because it invariably produces student complaints which will adversely affect tenure prospects. And he's right, in that when I do teach with PowerPoint, I always get one or two comment-sheet complaints at the end of the class-- of course, they're usually offset by a roughly equal number of positive comments, so I personally feel it's a wash.
And it's certainly true that there are some dreadful misuses of PowerPoint out there. I've seen some really astonishingly bad PowerPoint talks, and I have no doubt that the same people would give very bad lectures off PowerPoint.
But the specific problems that they cite aren't really problems inherent in PowerPoint. The complete-sentences-on-slides thing is something that I've seen done with overhead transparencies as well, and in one memorable instance, with Microsoft Word. You don't see it with chalk, because it takes too long to write out complete sentences on a chalkboard, but I've certainly sat in classes that were the spiritual ancestor of the "just read the slides" school of lecturing, where the professor pretty much read his or her notes without much deviation.
In fact, I tend to think that the focus on technology is obscuring the fact that "PowerPoint abuse" is just a special case of a larger problem: bad lecturing. Or, to turn it around a bit, I would say that "Chalkboard Abuse" is (or at least was) just as rampant as its more technological cousin.
Think about it: how many times have you sat in a class where the hardest part about the lectures was deciphering the professor's illegible scrawl? I took a class on electricity and magnetism once that was taught by a fellow who had the ability to make pretty much any Greek letter look like a "q." This is just deadly in E&M, which not only boasts a large number of Greek letters, but also a large number of q's-- if you weren't paying attention as he wrote each symbol on the board, it was just hopeless. His abominable handwriting is partly to blame for the fact that I never really understood E&M until grad school.
And how many times have you sat in a class where the professor just droned on in a monotone, reciting well-practiced lecture notes, and only occasionally pausing to write something down on the board? Too damn many times, that's how many-- the most memorable of which was the QM class taught by a Chinese woman with a thick accent, who essentially spent the semester reading the textbook to us. The explanations she gave were the explanations in the book. The examples she worked out were the examples worked out in the book. The equations she wrote on the board were the equations that were written out in the book. She told two jokes all semester: both were in the textbook.
I could go on-- classes that ground to a stop to allow the painstaking construction of some complicated figure; other classes rendered incomprehensible by the professor's total lack of artistic ability-- but you get the idea. For every bad lecture in PowerPoint, there's a bad lecture done with chalk. The problem here is real, but not new.
It is true that PowerPoint opens up some new failure modes (it's easy to go too fast, and if the slides are made available to students, many of them will be inclined to skip class), but it also closes off some old ones (bad handwriting and poorly drawn diagrams). And there are ways to deal with the new problems-- in particular, by leaving blank spaces in the slides that get filled in during lecture. And, to be fair, the article is pretty clear that the problem lies with the faculty who don't know how to use technology well, not the technology itself.
But again, the real problem isn't just a matter of technology enabling bad lectures, the problem is that too many faculty are predisposed to giving bad lectures in the first place. Bad lectures with PowerPoint just stand out more because of the relative novelty of the technology, while bad lectures using a chalkboard simply fade into the background, as chalk is still the default instructional technology. In another ten years, the situation may well be reversed (or we may be reading articles about how most professors do a really terrible job of using immersive virtual reality technology in their lectures, and would do better with PowerPoint).
The idea of providing more training to faculty in how to use technology effectively isn't a bad one (having been to one or two technology training workshops, I hesitate to call it a good idea). A better plan would be to provide future faculty with some better instruction in how to prepare and deliver a lecture effectively, in whatever medium.
That Voodoo That You Do
Brad is talking about something that's central to the idea of Social Security privatization. For some reason that no one really understands (though not for lack of trying), investment returns in the stock market are substantially higher than anyone thinks they rationally ought to be. So why not take advantage of it?
See, it's things like this that make me skeptical of the entire enterprise of economics.
On the policy side, if you don't know why the return is higher than you expect, it doesn't strike me as a great plan to just assume that the return will continue to be high, and budget accordingly. If you don't understand why it's higher than expected, you don't really know that it won't suddenly return to the rational expectation level, or even drop substantially below the level you expect. You probably won't understand that, either, but it's not like you'll be doing any worse than you are now. Unless you had money invested...
As a matter of policy, I'm really not comfortable with the idea of investing billions of dollars in a theory that's pretty clearly got problems. See also "missile defense."
(I also seem to recall a recent episode in stock market history in which prices and returns were higher than anyone rationally expected, and a certain set of economists managed to convince themselves that this would continue forever. That didn't work out all that well (though stock prices are probably still higher than they ought to be, and there still isn't a particularly good explanation).)
Of course, there's also the theory side, where this serves as yet another ignored piece of evidence. Stock market returns are higher than anyone rationally expects, and yet people continue to natter on endlessly about how wonderfully rational markets are, and how everything should be market-based. Given that the theories manifestly aren't doing a very good job of predicting the behavior of the markets we already have, why should anyone have any faith in the predictions for markets that don't exist yet?
(I recently heard an economist defending the idea of the "terrorist futures market" that was floated a while back, in which the government would let people trade "shares" in different sorts of terror attacks, and use the prices to predict the next attack. "Markets are a wonderful tool for aggregating information," he said in defense of the plan. But this relies on the assumption that somebody in the market has information to aggregate, which isn't at all clear. As I understand the idea, it doesn't strike me as likely to be more successful than hiring psychics (unless you let the terrorists invest, which is just perverse), but it has some inexplicable fascination.)
I have a number of colleagues and acquaintances in economics, who individually strike me as very smart people (and I have the utmost respect for the work they do with their students). But their discipline sometimes seems like nothing more than the product of collective insanity.
Journal Club 4 (Happy Thoughts)
I got sort of caught up in the election for a while there, which meant that my recently re-started journal reading was put on hold for a bit. On the bright side, though, this gives me a wide range of stuff to choose from for this edition of Journal Club. As I'm still trying to think Happy Thoughts, I'm only picking papers that I'm happy about for one reason or another.
Now, it may not be obvious to the average reader that there's anything to be happy about in a paper with a title like "Quantum Cloning of a Coherent Light State into an Atomic Quantum Memory," unless it's that they only used "Quantum" twice. But there are, in fact, two things about this that stand out to me: first, a former professor of mine was one of the authors of the famous "Quantum No-Cloning Theorem," which says that you're not allowed to make exact copies of quantum states without destroying the originals, so it's a fairly audacious title. Beyond that, though, this is the first paper I've ever read about quantum cryptography in which the authors take the side of the eavesdropper.
Quantum cryptography is a technique which uses the weird properties of quantum entanglement to guarantee the security of key distribution. Two people, traditionally called "Alice" and "Bob," who wish to communicate without their nefarious enemy Eve listening in will exchange entangled pairs of particles (generally photons), and generate a mathematical key to encode their message by making a series of measurements on those particles. The results of the measurements will be correlated in a certain way, and based on that correlation, Alice and Bob can generate a key that they both share, which Eve cannot access. Because the key generation depends on the quantum correlations between Alice's and Bob's particles, Eve can't even steal the key by intercepting one of the particles, and measuring it herself-- her measurements will upset the correlations, and Alice and Bob can detect that.
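The paper itself deals with a continuous-variable scheme, but the logic of eavesdropper detection is easiest to see in the simpler single-photon (BB84-style) protocol: Alice sends bits in randomly chosen bases, Bob measures in randomly chosen bases, and they keep only the positions where the bases matched. If Eve intercepts and resends, she can't know Alice's basis, and her blind guesses scramble about a quarter of the sifted key. This is just an illustrative toy simulation (the function names and 0/1 "bases" are my own, not from any real QKD library):

```python
import random

def measure(send_bit, send_basis, meas_basis, rng):
    # Measuring in the basis the photon was prepared in reproduces the bit;
    # a mismatched basis gives a 50/50 random outcome.
    return send_bit if meas_basis == send_basis else rng.randint(0, 1)

def bb84_error_rate(n_photons, eavesdrop, seed=42):
    """Fraction of sifted-key bits where Bob disagrees with Alice.
    Without Eve this is zero; with intercept-resend Eve, about 25%."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_photons):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        send_bit, send_basis = a_bit, a_basis
        if eavesdrop:
            # Eve measures in a random basis and resends what she saw.
            e_basis = rng.randint(0, 1)
            send_bit = measure(a_bit, a_basis, e_basis, rng)
            send_basis = e_basis
        b_basis = rng.randint(0, 1)
        b_bit = measure(send_bit, send_basis, b_basis, rng)
        if a_basis == b_basis:  # sifting: keep only matching-basis positions
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

print(bb84_error_rate(4000, eavesdrop=False))  # 0.0
print(bb84_error_rate(4000, eavesdrop=True))   # roughly 0.25
```

The point of the sketch is the last two lines: Alice and Bob can detect Eve simply by publicly comparing a random sample of their sifted key and checking the error rate.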
Quantum key distribution has been demonstrated with single photons, but they can be sort of difficult to deal with, so there are a number of schemes out there for using "continuous variables," which basically means "large numbers of photons." These photons are in what are called "coherent states" (which means that the number of photons and the phase of the light field are both uncertain, in a very particular way that satisfies the third Heisenberg relation in the links bar on the left). It turns out that it is possible to make more-or-less exact copies of coherent states, and that's what this paper is demonstrating. While this is sort of useful for Alice and Bob (who can thus store backup copies of their entangled states), the main impact would seem to be that it allows Eve to make copies and listen in undetected (more or less). I'm happy to see someone taking her side, for once.
From the same issue of PRL, we have this week's winner of the coveted Best Title Award, "Optimal Swimming at Low Reynolds Numbers." I'm not entirely clear on what it means, but it sounds cool, and that's a happy thing...
(OK, the basic deal seems to be that the authors are considering the locomotion of microscopic critters like amoebae, which "swim" by changing shape. They've developed a scheme that allows them to determine the most efficient set of "strokes" (the optimal sequence of body shapes) for such a creature to use in swimming through a viscous liquid. There's an animated GIF of the winner online, and if that's not a happy thing, I don't know what is...)
Next up are two papers from Phys. Rev. A that fall into the category of "Technical Tour de Force," aka "I'm happy somebody else did this." The first, "Role of wire imperfections in micromagnetic traps for atoms," is from a French group that's famous for its ability to calculate damn near anything from first principles. In this case, they're looking at the current hot topic of "atom chips," where small clouds of atoms are trapped in the magnetic fields generated by currents running through tiny wires manufactured on silicon chips. They're all the rage these days (hey to Jeff McGuirk).
A recurring problem with these systems is that the magnetic field isn't really perfectly smooth, but contains lots of little ripples which sort of mess up the atom clouds. The source of these ripples is a little difficult to trace, but this paper shows fairly convincingly that it has to do with the roughness of the wire surfaces (not a huge surprise). The paper's worth a look just for Figure 2, which shows the field profiles measured using atom clouds trapped at various heights above the wire, and calculated after using a scanning electron microscope to look at the entire length of the wire (which is something like 50 by 2800 microns, and about 14 microns high). The agreement between measured and calculated potentials is absurdly good, but I'd expect nothing less from the Orsay group.
The other PRA entry for this week is "Lifetimes of the 9s and 8p levels of atomic francium," by some guys I know at SUNY-Stony Brook. Lifetime measurements in and of themselves aren't all that thrilling, but this one is notable because francium has no stable isotopes, and the total amount of francium in existence on Earth at any given moment is measured in grams, and not many of them. Their sample is created by slamming oxygen atoms into gold foil at high speed, and then they manage to laser cool a fraction of the atoms produced. At one point, the radioactivity from the accelerator was so high that they couldn't even be in the room when the experiment was running (I think they upgraded the system before this set of experiments, though).
Lifetime measurements are still complicated and kind of dull, but just the fact that they manage to see these states at all is unbelievably cool.
Finally, the obligatory paper-from-another-field, this week from Nature: "Unusual activity of the Sun during recent decades compared to the previous 11,000 years" (no link, because they're dicks at Nature, and it's really difficult to get access). That's an eye-catcher, and the first paragraph lays it out clearly:
According to our reconstruction, the level of solar activity during the past 70 years is exceptional, and the previous period of equally high activity occurred more than 8,000 years ago. We find that during the past 11,400 years, the Sun spent only on the order of 10% of the time at a similarly high level of magnetic activity and almost all of the earlier high-activity periods were shorter than the present episode. Although the rarity of the current episode of high average sunspot numbers may indicate that the Sun has contributed to the unusual climate change during the twentieth century, we point out that solar variability is unlikely to have been the dominant cause of the strong warming during the past three decades.
So, in short, things are screwy with the Sun. So why is this a happy paper? Because it makes me appreciate what I do.
They got this result by looking back at tree rings dating back 11,000 years, and figuring out how much carbon-14 there was around at the time (by working out how much there is now, and tracing the radioactive decay backwards), which they can then use to work out how active the Sun was back then, and from that, they can work out how many sunspots there were in a given year, and compare it to what we see these days. Reading about this sort of experiment always makes me really grateful that I do tabletop physics-- everything I need for my experiments is (or will be) in a room in the basement of our science building. And I can walk right up and touch it-- I don't have to try to work out how many atoms there were in the trap based on how slowly the elevator doors open on the third floor, or work out collision cross-sections from looking at sheep entrails in Bavaria.
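The "tracing the radioactive decay backwards" step is just the exponential decay law run in reverse: if a ring that grew t years ago contains N carbon-14 atoms today, it started with N · 2^(t/5730), using carbon-14's half-life of roughly 5,730 years. A quick illustrative calculation (the input numbers here are made up for the example):

```python
C14_HALF_LIFE = 5730.0  # years, approximate half-life of carbon-14

def initial_c14(measured_atoms, age_years):
    """Run radioactive decay backwards: from the carbon-14 measured today
    in a tree ring of known age, infer how much was there when it grew."""
    return measured_atoms * 2 ** (age_years / C14_HALF_LIFE)

# A ring from ~11,400 years ago has been through almost exactly two
# half-lives, so it retains only about a quarter of its original
# carbon-14 -- the correction factor is close to 4.
print(initial_c14(1.0e6, 11400) / 1.0e6)
```

That inferred initial carbon-14 level is the quantity that tracks how active the Sun was, since solar magnetic activity modulates the cosmic rays that produce carbon-14 in the atmosphere.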
It's little things like that that make me happy.
NIST Certified Reference Alfredo
Another Saturday night, another dinner out, another conversation leading to a silly blog post. (Yes, this will make two posts in a row that read like I'm bidding for Scalzi's AOL gig... I meant to do a Journal Club post today, but wound up doing yard work, and fell asleep on the couch instead of reading papers. Tomorrow.)
This week, we actually remembered to make a reservation at one of the places we wanted to go, and got a table (and also an interesting rant from the hostess, who told us that "The busboy is stoned. I don't care what anybody says, I know he's stoned."). Kate had Fettuccine Alfredo, which was fine, but not exactly the same as her usual image of Fettuccine Alfredo. She commented that it's sort of strange that such a simple dish is such a hit-or-miss proposition at restaurants. Which reminded me of the theory of reference foods.
Back when I was at NIST, I had a colleague (who's now at JPL) who had a scientific method for restaurant comparison. He believed that there were certain dishes that served as a reference point for particular cuisines: if you went to a new Thai place, for example, you needed to try the Pad Thai, because that's the dish that you use to compare Thai restaurants to one another. For Vietnamese places, it was Lemon Grass Chicken. The only fair comparison between restaurants, the theory goes, is one that compares different versions of the exact same dish.
The criteria for determining the reference dish are pretty straightforward: it needs to be something distinctive to that cuisine, but also very common, so that you're more or less guaranteed to find it in any restaurant of that particular type. It should also be something relatively simple, but just complicated enough for skill in preparation to make a clear difference. And it should ideally have some connection to the rest of the dishes.
For Indian food, then (and I'm extending Bill's theory, now, because I don't recall him mentioning Indian food), something like Chicken Tikka Masala would probably be the reference dish. It's on the menu in pretty much every Indian restaurant, and involves both the tandoor oven and a fairly basic sauce, so you find out everything you need to know about an Indian restaurant.
It's not entirely clear what the standard Italian dish would be. It's clearly not Fettuccine Alfredo, as the quality of that particular dish doesn't seem to be well correlated with the quality of the restaurant (the place we were at Saturday is a good restaurant, but Kate wasn't entirely happy with the alfredo). It's probably something like Chicken Parmigiana (which gets you information about their cheese and tomato sauce), but that might be too generic these days to really be a good indicator.
We don't eat French or Mexican food often enough to really have a good idea of what to use for those cuisines. Suggestions are welcome.
Of course, it's a little tricky to reconcile the theory of reference foods with my personal Law of New Restaurants: "Always Order the Special." (On the theory that they've put more effort into making the specials, making those the best gauge of the kitchen's potential.) I suspect the solution is to always go with someone like Kate, who can be counted on to order boring things like Fettuccine Alfredo, but more experimentation is clearly called for.
Until Then, Don't You Go Changin'
With the election being over, I'm cutting way back on my blog reading, which means I've had time to get to some of the tasks that I've been shamefully neglecting around the house. Such as ripping the last few dozen CD's in my collection onto iTunes, and putting all the CD's into binders (we got rid of the wooden CD racks we'd been using when we got a new TV stand unit, which freed up a bunch of space in the living room). I'm currently ripping Tom Waits CD's, including a bunch of his "lounge singer from another dimension" stuff, which reminded me of a post that I thought of writing last weekend, but never got to.
Last Saturday, Kate and I felt like getting out of the house for a bit, so we headed out to check out the local restaurant scene. Of course, being horribly disorganized people, we didn't actually plan far enough ahead to get reservations, and after being shut out of a couple of places, we ended up seated in the bar area of a slightly fancier place than we'd been intending to go to. It's a place that is sort of trying to hang on to the air of a hip place from the Rat Pack era, and Kate and I probably lowered the mean age by a good ten or fifteen years when we walked in the door.
The food was very good, but sadly, they had a lounge act. Or, rather, they had one guy and a selection of pre-recorded music that he sang along to. He did all the obvious sorts of songs-- "Summer Wind," "That's Amore," "Mack the Knife"-- and was a reasonably good mimic of the original singers.
Of course, there was something sort of desperately pathetic about the whole enterprise, starting with his cheesy between-song patter (I was half waiting for the Bill Murray "Star Wars" song), and ending with the rote exclamations of excitement that went with the songs. Yelling "One more time!" to a pre-recorded backing band is just sad. The saddest bit of all, though, was when somebody requested Billy Joel's "Just the Way You Are," and he couldn't do it (he did "New York State of Mind" instead). I would've thought that song would be a compulsory figure for the cheesy lounge singer licensing process.
The question this raised in my mind, though, is why, exactly, is somebody singing other people's music automatically considered to be pathetic? Leave aside the whole backing band on CD thing-- even if he'd been playing the songs himself on a piano, it would've been vaguely sad. That's just not taken seriously these days-- if you're not singing your own songs (or at least doing covers with a different spin on them), you're a hopeless loser.
But then, if you think about it, the people he's imitating generally didn't write their own stuff. In fact, they all tended to sing the same songs-- there are songs that we associate with specific singers, but that didn't seem to stop any of the other singers in that genre from doing their own versions. And many of the really famous songs were written by people who didn't record anything at all.
It's not clear to me exactly when that stopped being the norm, but it's sometime in the early rock era. The Beatles started out as a cover band, and had some hits doing other people's songs, but they really rose to greatness doing their own material. And after that, it's nearly all originals, save for the occasional Dylan cover. The Motown empire was built on a stable of great songwriters doling out hits to some fairly interchangeable singers (I can't reliably tell the Four Tops from the early Temptations, for example), but somewhere in the Seventies, even that started to shift (probably around the time of What's Going On, but then I may be giving Marvin Gaye too much credit). These days, the only people who seem to do songs written by somebody else are boy bands and other teen pop acts, and that's one of the reasons they're objects of scorn for most music fans (fabulously wealthy objects of scorn, mind...).
I'm sure that there have been master's theses (at least) written about this shift in pop music, but I'm not sure exactly where the line is. When and how did the shift from doing songs out of a common pool to doing primarily original material occur, and is there any particular person or group who deserves credit or blame?