A post this week from a work-related blog I read mentioned the Large Hadron Collider. This post was not really about the LHC itself, or CERN, or the experiments they’re doing, which have already been heavily covered recently. It was about the massive data collection network involved in the experimentation.
This page from CERN has information about the LHC Computing Grid (LCG). If you’re into data centres, have a read.
Having worked in networking for many years, I find the data-throughput visualization tools really interesting. You can see, hour by hour, the amount of data throughput for all the different experiments that make up the LHC. A couple of hours before I wrote this they hit 380 megabytes per second across all systems, with Alice (where they hope to detect quark-gluon plasma), Atlas (where they hope to spot the Higgs boson, find evidence of dark matter, and answer questions about whether there might be more spacetime dimensions than we think there are), and CMS (similar to Atlas, but using different methods of detection) being by far the largest.
Switch to a daily view, though, and you’ll see that they’re actually in a very low data-collection mode at the moment: there was a local peak around 1250 MB/s back on 24 August. The largest average throughput this year was about 2100 MB/s in late May. Lots of data, and a really neat way of watching progress.
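To get a feel for what those rates mean, here's a quick back-of-the-envelope conversion of the figures above into data volume per day. This is just my own sketch, assuming the dashboard uses decimal units (1 TB = 10^6 MB), which is the usual convention for network throughput:

```python
# Back-of-the-envelope: turn the dashboard's sustained MB/s figures into TB/day.
# Assumes decimal units (1 TB = 10^6 MB), the usual networking convention.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def mb_per_s_to_tb_per_day(rate_mb_s):
    """Convert a sustained rate in MB/s to data volume in TB/day."""
    return rate_mb_s * SECONDS_PER_DAY / 1_000_000

# The rates quoted above.
for label, rate in [("current, all systems", 380),
                    ("24 August local peak", 1250),
                    ("late-May yearly peak", 2100)]:
    print(f"{label}: {rate} MB/s is about "
          f"{mb_per_s_to_tb_per_day(rate):.0f} TB/day")
```

So even the "low" current rate works out to roughly 33 TB a day, and the late-May peak to around 180 TB a day, if sustained.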
A word on the similar goals of the Atlas/CMS detectors: I heard on BBC Radio 4 the other morning that there's a pretty healthy dose of rivalry between the two scientific teams over which might make its discoveries first.