ParaForming: Forming Parallel Haskell Programs using Novel Refactoring Techniques by Prof Kevin Hammond

Event details

  • When: 21st November 2011 14:00 - 15:00
  • Where: Phys Theatre C
  • Series: CS Colloquia Series
  • Format: Colloquium

Abstract

Despite Moore’s “law”, uniprocessor clock speeds have now stalled. Rather than using single processors running at ever higher clock speeds, it is common to find dual-, quad- or even hexa-core processors, even in consumer laptops and desktops. Future hardware will not be slightly parallel, however, as in today’s multicore systems, but will be massively parallel, with manycore and perhaps even megacore systems becoming mainstream. This means that programmers need to start thinking parallel. To achieve this they must move away from traditional programming models and development processes that offer parallelism as a bolted-on afterthought.

This talk introduces the idea of “paraforming”, a new approach to constructing parallel functional programs using formally-defined refactoring transformations.
We show how parallel programs can be built from a small number of primitive Haskell building blocks, and describe some new refactorings for Parallel Haskell that use these building blocks to capture common parallel abstractions, such as divide-and-conquer and data parallelism. Using a paraforming approach, we are able to easily obtain significant and scalable speedups (up to 7.8 on an 8-core machine).
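To give a flavour of the kind of building block the abstract refers to, the following is a minimal sketch (not the speaker's actual library) of a divide-and-conquer skeleton in Glasgow Parallel Haskell, written with the standard `par` and `pseq` primitives from `Control.Parallel`; the function names `divConq` and `psum` and the threshold value are illustrative assumptions, not taken from the talk.

```haskell
import Control.Parallel (par, pseq)

-- A generic divide-and-conquer building block (illustrative sketch):
-- one sub-result is sparked for parallel evaluation with `par` while
-- the other is evaluated in the current thread with `pseq`.
divConq :: (a -> Bool)    -- is the problem trivial?
        -> (a -> b)       -- solve a trivial problem directly
        -> (a -> (a, a))  -- split a problem into two subproblems
        -> (b -> b -> b)  -- combine the sub-results
        -> a -> b
divConq trivial solve split combine = go
  where
    go x
      | trivial x = solve x
      | otherwise = r1 `par` (r2 `pseq` combine r1 r2)
      where
        (l, r) = split x
        r1     = go l
        r2     = go r

-- Example instantiation: a parallel sum over a list, falling back to
-- sequential `sum` below an (arbitrary) threshold of 1000 elements.
psum :: [Int] -> Int
psum = divConq (\xs -> length xs <= 1000)
               sum
               (\xs -> splitAt (length xs `div` 2) xs)
               (+)
```

A refactoring tool in the paraforming style would introduce a skeleton like `divConq` mechanically, transforming an existing sequential definition into an instance of it rather than requiring the programmer to write the coordination code by hand.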

Biography

Kevin Hammond has worked extensively in the field of advanced programming language design and implementation, with a focus on cost and performance issues. His work concentrates on functional language designs, including that of the standard non-strict functional language Haskell, where he served on the international design committee, and worked on the dominant compiler, GHC. Since receiving his PhD in 1989, he has published widely in the general area of parallel programming, producing over 80 books, book chapters, journal papers and other refereed publications focusing on parallel computing, domain-specific programming languages, real-time systems, cost issues, adaptive run-time environments, lightweight concurrency, high-level programming language design and performance monitoring/visualisation. He has run over 20 successful national and international research projects, and is a founder member of IFIP WG 2.11 (Generative Programming).