Programming Massively Parallel Processors with CUDA

A daily iTunes U and Technology podcast
Episodes of Programming Massively Parallel Processors

Michael Garland of NVIDIA Research discusses parallel sorting methods that make searching, categorization, and the construction of data structures easier on the GPU (a Thrust-based sorting sketch follows the episode list). (April 20, 2010)
John Nickolls discusses how to optimize parallel GPU performance. (May 20, 2010)
Avi Bleiweiss delivers a lecture on a path-planning system on the GPU. (May 18, 2010)
Students are taught how to effectively program massively parallel processors using the CUDA C programming language. Students also develop familiarity with the language itself and are exposed to the architecture of modern GPUs. (April 15, 2010)
Steven Parker, Director of High Performance Computing and Computational Graphics at NVIDIA, speaks about ray tracing. (May 11, 2010)
William Dally guest-lectures on "The End of Denial Architecture and the Rise of Throughput Computing." (May 13, 2010)
Michael C. Shebanow, Principal Research Scientist with NVIDIA Research, talks about the new Fermi architecture. This next-generation CUDA architecture, code-named "Fermi," was the most advanced GPU computing architecture NVIDIA had built to date. (May 6, 2010)
Jonathan Cohen, a Senior Research Scientist at NVIDIA Research, talks about solving partial differential equations with CUDA (a stencil sketch follows the episode list). (May 4, 2010)
Nathan Bell from NVIDIA Research talks about sparse matrix-vector multiplication on throughput-oriented processors (a kernel sketch follows the episode list). (April 29, 2010)
Nathan Bell of NVIDIA Research talks about Thrust, a productivity library for CUDA (a sketch follows the episode list). (April 27, 2010)
David Tarjan continues his discussion of parallel patterns. (April 22, 2010)
Lukas Biewald of Dolores Labs discusses performance considerations, including memory coalescing, shared memory bank conflicts, control-flow divergence, occupancy, and kernel-launch overheads (a coalescing sketch follows the episode list). (April 13, 2010)
Jared Hoberock of NVIDIA lectures on CUDA memory spaces for CS 193G: Programming Massively Parallel Processors (a sketch follows the episode list). (April 8, 2010)
Atomic operations in CUDA and the associated hardware are discussed (a sketch follows the episode list). (April 6, 2010)
Jared Hoberock of NVIDIA gives the introductory lecture to CS 193G: Programming Massively Parallel Processors (a first-kernel sketch follows the episode list). (March 30, 2010)
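
Since the introductory lecture (March 30, 2010) covers how a CUDA kernel is launched over a grid of thread blocks, here is a minimal sketch of that mechanic. SAXPY is a conventional first example; the kernel name and launch configuration are illustrative, not taken from the lecture.

```cuda
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) y[i] = a * x[i] + y[i];              // one element per thread
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    // ... fill x and y (omitted) ...
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 4096 blocks of 256 threads
    cudaDeviceSynchronize();
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```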
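For the atomic-operations episode (April 6, 2010), a minimal sketch of atomicAdd in the standard histogram setting, where many threads may increment the same bin. This plain global-memory version is the simplest correct form, not necessarily the lecture's code.

```cuda
// A 256-bin histogram: atomicAdd makes the read-modify-write on a bin
// safe even when many threads hit the same bin concurrently.
__global__ void histogram(const unsigned char* data, int n, unsigned int* bins) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(&bins[data[i]], 1u);  // serialized only under contention
}
```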
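The memory-spaces lecture (April 8, 2010) distinguishes registers, per-block shared memory, and global memory. A minimal sketch, assuming an illustrative block-reversal kernel rather than the lecture's own example:

```cuda
#define BLOCK 256

__global__ void reverse_block(const float* in, float* out) {
    __shared__ float tile[BLOCK];         // shared: visible to the whole block
    int t = threadIdx.x;                  // t lives in a register
    int base = blockIdx.x * BLOCK;

    tile[t] = in[base + t];               // global -> shared
    __syncthreads();                      // wait until the tile is fully loaded
    out[base + t] = tile[BLOCK - 1 - t];  // shared -> global, reversed
}
```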
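The performance-considerations episode (April 13, 2010) lists memory coalescing first. A minimal sketch contrasting a coalesced copy with a strided one; the kernel names are illustrative:

```cuda
// Adjacent threads touching adjacent addresses coalesce into few
// memory transactions; a large stride scatters them across many.
__global__ void copy_coalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];   // thread i reads element i: coalesced
}

__global__ void copy_strided(const float* in, float* out, int n, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n) out[i] = in[i];   // neighboring threads are `stride` floats apart
}
```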
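The Thrust episode (April 27, 2010) presents the library as a productivity layer over CUDA, and the sorting episode (April 20, 2010) motivates fast parallel sorts. A minimal sketch of a GPU sort through Thrust; the data is made up for illustration:

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdio>

int main() {
    // Fill a host vector with keys in descending order.
    thrust::host_vector<int> h(8);
    for (int i = 0; i < 8; ++i) h[i] = 8 - i;

    // Copy to the device and sort in parallel on the GPU.
    thrust::device_vector<int> d = h;
    thrust::sort(d.begin(), d.end());

    // Copy back and print the now-ascending keys.
    thrust::copy(d.begin(), d.end(), h.begin());
    for (int i = 0; i < 8; ++i) printf("%d ", h[i]);
    printf("\n");
    return 0;
}
```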
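The sparse matrix-vector multiplication episode (April 29, 2010) corresponds to Bell and Garland's work on CSR kernels. A minimal sketch of the scalar CSR variant, one thread per row; array names (row_ptr, col_idx, vals) are illustrative, and host setup is omitted:

```cuda
__global__ void spmv_csr_scalar(int num_rows,
                                const int*   row_ptr,
                                const int*   col_idx,
                                const float* vals,
                                const float* x,
                                float*       y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float dot = 0.0f;
        // Accumulate the dot product of this row with x.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            dot += vals[j] * x[col_idx[j]];
        y[row] = dot;
    }
}
```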
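For the PDE episode (May 4, 2010), a minimal sketch of one Jacobi relaxation sweep for the 2-D Laplace equation on an n x n grid: each interior point is replaced by the average of its four neighbors. Illustrative only; the lecture's solvers and discretizations may differ.

```cuda
__global__ void jacobi_step(const float* u, float* u_new, int n) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Update interior points only; boundaries keep their values.
    if (x > 0 && x < n - 1 && y > 0 && y < n - 1) {
        u_new[y * n + x] = 0.25f * (u[y * n + x - 1] + u[y * n + x + 1] +
                                    u[(y - 1) * n + x] + u[(y + 1) * n + x]);
    }
}
```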