Computing in High Energy and Nuclear Physics (CHEP) is a major series of international conferences for physicists and computing professionals in the fields of high energy and nuclear physics, computer science and information technology.

CHEP'07 took place in Victoria, Canada, on 2–7 September and was attended by some 470 people from all over the HEP world. It was preceded by a two-day workshop on the Worldwide LHC Computing Grid (WLCG), attended by nearly 200 people. This was the last CHEP conference before the LHC start-up, and most talks covered the preparations for the different components of LHC experiment computing, from online data taking and event processing to event reconstruction and analysis.

To continue with statistics, 429 abstracts were submitted by 1208 authors spread across the seven programme tracks: online computing; software components, tools and databases; computer facilities, production Grids and networking; collaborative tools; distributed data analysis and information management; event processing; and Grid middleware and tools (figure 1). Owing to this large number of contributions, around 50% were presented as posters in two sessions. Each session was on display for two days, with fixed times at which the authors were available at their posters for discussion.

Looking ahead

The evolution of computing technology over the next few years was covered in several plenary talks. One such talk was "Towards petascale and exascale computing" by Jim Sexton from IBM Research – IBM was one of the conference's gold sponsors. Although Sexton believes Moore's law still holds, we are approaching fundamental limits: each step forward now requires work-arounds, and much of the performance gain comes from parallelism across cores. IBM's Blue Gene solution is an example of this, with configurations ranging from one to 72 racks and up to 294,912 cores. However, while adding cores to increase raw power may still be possible, writing programs that make optimal use of such machines is difficult.
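As a purely illustrative C++ sketch (not taken from the talk; the name count_even_parallel and the even-number task are hypothetical), the simplest way to use many cores is to split the work across threads and give each thread its own counter, so no locking or shared writes are needed until the results are combined:

    // Illustrative sketch only: assumes nthreads >= 1.
    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <numeric>
    #include <thread>
    #include <vector>

    std::uint64_t count_even_parallel(const std::vector<std::uint64_t>& data,
                                      unsigned nthreads) {
        std::vector<std::uint64_t> partial(nthreads, 0);   // one private counter per thread
        std::vector<std::thread> workers;
        const std::size_t chunk = data.size() / nthreads + 1;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&partial, &data, chunk, t] {
                const std::size_t begin = t * chunk;
                const std::size_t end = std::min(data.size(), begin + chunk);
                for (std::size_t i = begin; i < end; ++i)
                    if (data[i] % 2 == 0) ++partial[t];    // no locking, no shared writes
            });
        }
        for (auto& w : workers) w.join();                  // wait for all threads
        return std::accumulate(partial.begin(), partial.end(), std::uint64_t{0});
    }

    int main() {
        std::vector<std::uint64_t> data(1000000);
        std::iota(data.begin(), data.end(), 0);            // 0, 1, 2, ...
        return count_even_parallel(data, 4) == 500000 ? 0 : 1;
    }

Even this simple pattern has pitfalls at scale (for example, false sharing between the per-thread counters), which hints at why making efficient use of hundreds of thousands of cores is far harder still.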

Memory will be the dominant cost in upcoming computers, and this theme was taken up by another sponsor when Steve Pawlowski from Intel gave his talk on multicore processors. He gave examples of how memory can be brought logically and physically closer to the processing units. Like the IBM speaker, he spoke of increasing performance through more parallelism, since improving raw chip performance now offers diminishing returns. In a plenary session on "LHC software and CPU architecture", Sverre Jarp (CERN/IT) recommended increasing the level of instruction-level parallelism in LHC software to help compilers produce more effective machine code. We need to improve the multithreading capabilities of the applications and, more generally, to simplify and restructure the code. As we approach billion-transistor processors "we should increase the agility of our software structures", said Jarp.
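A minimal C++ sketch of the kind of restructuring being advocated (not from the talk; Hit, sum_energy_branchy and sum_energy_flat are illustrative names, not part of any LHC framework): the first loop hides its parallelism behind a data-dependent branch, while the second works on flat arrays with branch-free arithmetic that a compiler can unroll and, with suitable flags, vectorise.

    #include <cstddef>
    #include <vector>

    struct Hit { double e; bool noisy; };

    // Before: a data-dependent branch inside the loop limits the instruction
    // parallelism the compiler and CPU can extract.
    double sum_energy_branchy(const std::vector<Hit>& hits) {
        double sum = 0.0;
        for (const Hit& h : hits) {
            if (!h.noisy)
                sum += h.e;
        }
        return sum;
    }

    // After: flat arrays and branch-free arithmetic; the independent
    // multiply-adds give the compiler room to unroll and vectorise
    // (for floating-point reductions this typically requires flags such as
    // -ffast-math, an assumption rather than a guarantee).
    double sum_energy_flat(const std::vector<double>& energy,
                           const std::vector<double>& weight) {  // weight is 0.0 or 1.0
        double sum = 0.0;
        const std::size_t n = energy.size();
        for (std::size_t i = 0; i < n; ++i)
            sum += weight[i] * energy[i];
        return sum;
    }

The point is less the particular loop than the habit: keeping data layouts simple and iterations independent is what lets compilers and multicore hardware do the rest.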

Another gold sponsor talk, by Eng Lim Goh of SGI, covered some aspects of the physical expansion of computing: how computer facilities are gradually running out of capacity in terms of space, power and even weight. SGI has produced a rack system packed so tightly that it exceeds the weight rating of many computer floors!

A squeeze on funding?

Returning to today's realities, several speakers noted that LHC experiments have benefited from an unprecedented level of support from Grid projects backed by national funding agencies, although Miron Livny from the University of Wisconsin, US, believes the days of such "easy funding" may be drawing to a close.

Experiments have begun demonstrating production-level processing on Grids but not yet at the scale needed for full LHC running conditions. There is still a lot of work expected in the final year of preparation. "We are getting there slowly," said Les Robertson (CERN/IT), speaking about the WLCG, "but from now until first beams we need continuous testing, driven by experiments, with realistic scenarios, good monitoring and measurements, and the proactive participation of all sites, developers and especially storage experts". According to Jamie Shiers (CERN/IT), despite the considerable progress made since CHEP'04, the main areas of concern remain unchanged:

• data storage and management, and their integration with experiment frameworks; and

• reliability and usability, which will become critical for widespread use in data analysis, for example.

In his summary talk, Matthias Kasemann of CERN and DESY reminded the audience that the LHC is expected to be operational next summer, and that by spring 2008 the experiments must therefore be ready to take data. He closed the meeting by thanking the principal organizers. The next CHEP will be held in Prague in spring 2009.

As usual, a detailed trip report has been written by Alan Silverman, with contributions from many people who are listed in the report. It is saved in CDS as CERN-IT-Note-2007-013 at http://cdsweb.cern.ch/record/1057583. You can also read a summary in the Computing News section of the November issue of the CERN Courier.