CMSC 32001
Topics in Programming Languages
Winter 2016


General Information

Instructor: John Reppy (Ryerson 256)
Lecture: M 3-5 (Ry 255)

Description

The focus of this seminar will be on high-level languages and models for programming GPUs. We will begin by looking at the architectural features of GPUs that both make them very fast and very difficult to program. With that background in place, we will read and discuss recent (and some not-so-recent) papers on languages and models for GPUs.

Note: The seminar was originally scheduled to meet twice a week, but we are meeting once a week for two hours instead. The new meeting time and location are Mondays from 3pm to 5pm in Ry 255.

Reading by week

Week 2 (January 12, 2016)

For week 2, please take a look at the paper Parallel Prefix Sum (Scan) with CUDA, which was Chapter 39 of GPU Gems 3.
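The heart of that chapter is Blelloch's work-efficient scan, which builds partial sums up a balanced tree (the up-sweep) and then pushes them back down (the down-sweep). As background, here is a sequential Python sketch of the two phases; on a GPU, each pass of the inner loop corresponds to one parallel kernel step. This is an illustration of the algorithm, not code from the paper.

```python
def exclusive_scan(a):
    """Work-efficient exclusive prefix sum (Blelloch scan).

    Sequential sketch: each iteration of the inner loops is
    independent and would run as one parallel step on a GPU.
    Assumes len(a) is a power of two, as in the paper's core version.
    """
    x = list(a)
    n = len(x)
    # Up-sweep (reduce): build partial sums up a balanced binary tree.
    d = 1
    while d < n:
        for i in range(0, n, 2 * d):
            x[i + 2 * d - 1] += x[i + d - 1]
        d *= 2
    # Down-sweep: clear the root, then propagate prefix sums back down.
    x[n - 1] = 0
    d = n // 2
    while d >= 1:
        for i in range(0, n, 2 * d):
            t = x[i + d - 1]
            x[i + d - 1] = x[i + 2 * d - 1]
            x[i + 2 * d - 1] += t
        d //= 2
    return x

print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
```

Note that both phases do O(n) total additions, which is what makes this version "work-efficient" compared to the naive O(n log n) scan also discussed in the chapter.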

Week 3 (January 20, 2016; note the different meeting day for this week only)

For week 3, we will look at an approach for handling tree traversals in GPU programs that has been developed by researchers at Purdue. There are two papers:

Week 4 (January 25, 2016)

More discussion of the techniques from last week, plus one additional paper:

Week 5 (February 1, 2016)

This week we will look at ray tracing on GPUs and the use of persistent threads as an implementation technique. There are several papers:
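As background: the persistent-threads idea replaces the usual one-thread-per-task launch with a fixed pool of long-lived threads that repeatedly pull work from a shared queue, which helps with irregular workloads like ray traversal. Here is a minimal CPU analogue of the pattern in Python; the lock stands in for the GPU's atomic counter, and the names are illustrative rather than taken from any of the papers.

```python
import threading

def persistent_pool(tasks, num_workers, process):
    """Sketch of the persistent-threads pattern: a fixed pool of
    workers repeatedly claims the next task index from a shared
    counter until the queue is drained. On a GPU the counter would
    be bumped with atomicAdd instead of a lock."""
    results = [None] * len(tasks)
    counter = {"next": 0}
    lock = threading.Lock()

    def worker():
        while True:
            with lock:                     # stands in for atomicAdd
                i = counter["next"]
                counter["next"] += 1
            if i >= len(tasks):
                return                     # queue drained; thread retires
            results[i] = process(tasks[i])

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(persistent_pool(list(range(8)), 3, lambda x: x * x))
# [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the pattern is load balance: a thread that finishes a cheap task immediately grabs another, rather than sitting idle until the whole launch completes.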

Week 6 (February 8, 2016)

This week we will look at a couple of low-level languages that have been designed for GPU programming.

Week 8 (February 26, 2016)

This week we will look at several papers on flattening nested data parallelism (NDP):
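For orientation: the flattening transformation represents a nested array as flat data plus a segment descriptor, so that operations over the nested structure become flat segmented operations that map well onto wide parallel hardware. A small Python sketch of the representation (the helper names here are illustrative, not from the papers):

```python
def flatten(nested):
    """Flattening: a nested array becomes flat data plus a segment
    descriptor (the length of each subarray). This is the standard
    NDP representation; empty segments are preserved."""
    data = [x for seg in nested for x in seg]
    seglens = [len(seg) for seg in nested]
    return data, seglens

def segmented_sum(data, seglens):
    """A nested map of sums becomes a single flat segmented
    reduction over the flattened representation."""
    sums, i = [], 0
    for n in seglens:
        sums.append(sum(data[i:i + n]))
        i += n
    return sums

data, seglens = flatten([[1, 2], [], [3, 4, 5]])
print(data, seglens)                 # [1, 2, 3, 4, 5] [2, 0, 3]
print(segmented_sum(data, seglens))  # [3, 0, 12]
```

Because the data array is flat, the segmented reduction can be scheduled as a regular parallel operation regardless of how irregular the original nesting was, which is the payoff the flattening papers build on.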

Week 9 (February 29, 2016)

This week we will look at more papers on flattening nested data parallelism:

Week 10 (March 7, 2016)

This week we will finish up a discussion of flattening and then look at some papers on piecewise execution of NDP programs.

Week 11 (March 14, 2016)

This week we will look at some optimization techniques for NDP.


Last revised: February 29, 2016