One columnist doesn’t understand science

Simon Jenkins is a columnist for a couple of UK newspapers, an author, and an editor. I agree with some of his positions: the benefits of nuclear power, for instance. But lately I’m disagreeing with him a lot more often than I’m agreeing.

Back in January he tried to argue that the UK’s reaction to the swine flu threat was overblown, whipped up by scaremongering scientists. He doesn’t seem to understand probabilities, or the value of problem avoidance (that is, that an appropriate and timely response may well have prevented a crisis, which is really the entire point).
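To make the probability point concrete, here’s a toy expected-cost comparison in Python. Every number in it is invented purely for illustration (none comes from the actual swine flu response); the point is just that when the potential loss is huge, even a modest chance of disaster can make up-front mitigation the cheaper bet, even though the spend will look “wasted” in the most likely single outcome.

```python
# Toy expected-cost comparison: act vs. do nothing about a pandemic threat.
# All figures are made up for illustration; none come from the swine flu episode.

p_pandemic = 0.10               # assumed probability of a severe outbreak
cost_of_disaster = 100e9        # assumed cost if it happens and we did nothing (GBP)
cost_of_mitigation = 1e9        # assumed cost of vaccines, antivirals, planning (GBP)
mitigation_effectiveness = 0.9  # assumed fraction of the disaster cost avoided

expected_cost_do_nothing = p_pandemic * cost_of_disaster
expected_cost_mitigate = cost_of_mitigation + (
    p_pandemic * cost_of_disaster * (1 - mitigation_effectiveness)
)

print(f"Do nothing: expected cost ~ £{expected_cost_do_nothing / 1e9:.1f}bn")
print(f"Mitigate:   expected cost ~ £{expected_cost_mitigate / 1e9:.1f}bn")
# With these (made-up) numbers: £10.0bn vs £2.0bn. Mitigation wins on average,
# even though 90% of the time no pandemic occurs and the £1bn looks wasted.
```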

Now Jenkins is taking shots at Martin Rees’ recent Royal Society address. This time he’s saying that scientists are money-grubbers who won’t allow anyone to question them.

Science, [we are told], should “engage broadly with society and public affairs”. In other words, it should get more money.

The Large Hadron Collider [is] on a par with aircraft carriers and Olympic games for useless extravagance.

It’s a good thing that no useful scientific discoveries have ever been made by accident, then, Simon, or come from an unexpected source. He goes on and on. University science programmes get more funding per student than arts programmes, he exclaims! Well, duh: science requires far more labs and equipment.

He also employs that slimy tactic of making someone’s position seem suspect by putting double quotes around individual words and phrases:

Yet [Rees] promotes just such theft. He wants more money or Britain’s “success in attracting mobile talent will be at risk”. Unless we continue to attract and nurture foreigners, we will “not retain international competitiveness”. Less cash would jeopardise the nation’s status in “the international premier league”. It would damage Britain’s “standing”, its “leverage”, indeed, the very “sustainability of its society”.

Jenkins is cranky. I’m not sure why. Luckily the comments on his column have plenty of people calling him out on his bullshit.

Further LHC delays

Nooooo!

From the BBC:

A director at the Large Hadron Collider in Geneva has told BBC News that some mistakes were made in construction.

Dr Steve Myers said these faults will delay the machine reaching its full potential for two years.

The atom smasher will reach world record power later this month at 7 trillion electron volts (TeV).

But the machine must close at the end of 2011 for up to a year for work to make the tunnel safe for proton collisions planned at twice that level.

The machine only recently restarted after being out of action for 14 months following an accident in September 2008.

LHC out of commission for 2 months

From the CERN web page for the Large Hadron Collider:

Geneva, 20 September 2008. During commissioning (without beam) of the final LHC sector (sector 34) at high current for operation at 5 TeV, an incident occurred at mid-day on Friday 19 September resulting in a large helium leak into the tunnel. Preliminary investigations indicate that the most likely cause of the problem was a faulty electrical connection between two magnets, which probably melted at high current leading to mechanical failure…

A full investigation is underway, but it is already clear that the sector will have to be warmed up for repairs to take place. This implies a minimum of two months down time for LHC operation.

IT throughput at the LHC data grid

A post this week on a work-related blog I read mentioned the Large Hadron Collider. The post wasn’t really about the LHC itself, or CERN, or the experiments they’re doing, all of which have been covered heavily lately. It was about the massive data-collection network behind the experiments.

This page from CERN has information about the LHC Computing Grid (LCG). If you’re into data centres, have a read.

Having worked in networking for many years, I find the data-throughput visualization tools really interesting. You can see, hour by hour, the data throughput for each of the experiments that make up the LHC. A couple of hours before I wrote this they hit 380 megabytes per second across all systems, with Alice (where they hope to detect quark-gluon plasma), Atlas (where they hope to spot the Higgs boson, find evidence of dark matter, and answer questions about whether there might be more spacetime dimensions than we think), and CMS (similar goals to Atlas, but different methods of detection) being by far the largest.

Switch to a daily view, though, and you’ll see that they’re actually in a very low data-collection mode at the moment: there was a local peak around 1250 MB/s back on 24 August. The largest average throughput this year was about 2100 MB/s in late May. Lots of data, and a really neat way of watching progress.

LHC data throughput year to date (click to enlarge)
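To put those rates in perspective, here’s a quick back-of-the-envelope conversion, as a minimal Python sketch. Only the MB/s figures come from the charts above; the rest is plain arithmetic (decimal units, 1 TB = 10⁶ MB).

```python
# Convert the sustained throughput figures quoted above into daily data volumes.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def mb_per_s_to_tb_per_day(rate_mb_s: float) -> float:
    """Sustained rate in MB/s -> total volume in TB/day (decimal units)."""
    return rate_mb_s * SECONDS_PER_DAY / 1e6  # 1 TB = 1,000,000 MB

for label, rate in [("current (380 MB/s)", 380),
                    ("24 August peak (1250 MB/s)", 1250),
                    ("late-May average (2100 MB/s)", 2100)]:
    print(f"{label}: ~{mb_per_s_to_tb_per_day(rate):.0f} TB/day")

# current (380 MB/s): ~33 TB/day
# 24 August peak (1250 MB/s): ~108 TB/day
# late-May average (2100 MB/s): ~181 TB/day
```

Even the “very low” current rate works out to tens of terabytes a day, which says something about the scale of the grid that has to move and store it all.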

A word on the similar goals of the Atlas/CMS detectors: I heard on BBC Radio 4 the other morning that there’s a pretty healthy dose of rivalry between the two scientific teams about who might make their discoveries first.

Large Hadron Collider: the engineering

As I said the other day, the BBC is getting all fired up (pardon the pun) about the Large Hadron Collider at CERN going live this week. They’ve got tons of online stuff about it.

I just found this: a special write-up on the engineering of the LHC. It’s a phenomenal project: a 27km tunnel, 1,740 supercooled superconducting magnets, a frozen underground river, and a 2,000t detector component lowered 100m with only 20cm of clearance.

Hoping for a collision