Introduction to Parallel Computing
This is the main page of a graduate and senior undergraduate level course in parallel computing taught in 2017 (as semester 2016/2, due to a series of work stoppages) at the Polytechnic Institute IPRJ/UERJ. It is generally useful for advanced programmers in the fields of scientific and multimedia programming.
General Info
- Instructor: prof. Ricardo Fabbri, Ph.D. Brown University
- Meeting times: Tuesdays 2:20pm-4pm and Thursdays 8:50am-10:30am, room (205?)
- Evaluation criteria: one quiz at midterm (60%), practical projects (40%).
- Forum for file exchange and discussion: email and IRC #labmacambira
Pre-requisites
- Linux - intermediate to advanced (will be reviewed as needed) - read Recommended Reading
- C/C++ - intermediate to advanced (will be reviewed) - read Recommended Reading
- Basic understanding of computer architecture - read "Computer Systems: A Programmer's Perspective" listed in Recommended Reading.
Approximate Content
The course focuses on software techniques for parallel computing. We aim at a comprehensive treatment of different types of practical parallel programming techniques.
Part I
- process-oriented parallel programming and concurrency (see the sketch after this list)
- thread programming/thread safety
- single-core vector instructions
- multi-processor and multi-core programming
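To make the first item concrete, here is a minimal sketch of process-oriented parallelism in C using the standard POSIX fork/wait calls (an illustration only; the worker count and the work itself are placeholders):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: spawn one worker process per chunk of work, then reap them all. */
int main(void)
{
    const int nworkers = 4;

    for (int i = 0; i < nworkers; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                    /* child: do its share, then exit */
            printf("worker %d: pid %d\n", i, (int)getpid());
            _exit(EXIT_SUCCESS);
        }
        /* parent: keep spawning */
    }

    for (int i = 0; i < nworkers; i++)
        wait(NULL);                        /* reap all children */
    return 0;
}
```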
Part II
- High level parallel programming paradigms and language constructs: focus of the course
- Recent technology to make parallel programming easier, more scalable and widely useful
- How languages such as Lua, Go, and Scilab support parallel concepts in powerful ways
- For interested students, we'll also cover the following
- mapreduce/hadoop
- MPI
- Cuda/OpenCL
- Dataflow techniques
Each of the above techniques would merit a course of its own, as is done in many of the best universities. We therefore practice fundamentals in the first half of the course, then specialize the later part towards students' interests and graduate research.
Main Resources
- Textbooks
- 1st part of the course: "Is Parallel Programming Hard, And, If So, What Can You Do About It?" - Paul E. McKenney / IBM (editor).
- For MPI: "An Introduction to Parallel Programming" , by Peter Pacheco[1]
- For Cuda/OpenCL: Programming Massively Parallel Processors: A Hands-On Approach[2]
- General textbook: A. Grama, A. Gupta, G. Karypis, V. Kumar, Introduction to Parallel Computing, Second Edition, Addison-Wesley, 2003.
- Algorithms-oriented textbook: Algorithms: sequential, parallel, and distributed, Kenneth A. Berman, Jerome L. Paul
- For more theoretical info, see the chapter on parallel algorithms in Cormen's classical algorithms book.
- Mapreduce, GFS, and Bigtable:
- Papers from Google
- http://code.google.com/edu/parallel/mapreduce-tutorial.html
- Wikipedia
- Hadoop documentation
- Presentations from IBM and Intel: Cilk, etc.
- Wikipedia pages
- Rice lecture notes on Parallel Computing [3]
- Other Resources
- http://www-users.cs.umn.edu/~karypis/parbook/
- http://www.cse.iitd.ernet.in/~subodh/courses/CSL860/
- Running Hadoop on Ubuntu Linux (single- and multi-node clusters) [4]
- Short course on CUDA at emc 2012 conference: http://josericardojunior.com/?p=219
Lectures
Partial listing & Tentative Outline
- Overview of parallel computing: https://computing.llnl.gov/tutorials/parallel_comp/
- Review of Linux:
- Do the Learning the Unix Programming Environment tutorial to brush up on your skills
- See the book Running Linux from the Recommended Reading
- Review of C/C++ (finished 28/Aug)
- Fundamental programming techniques: processes and threads
- Read The Unix Programming Environment for some classic multi-process programming[5]
- What is a process: http://linuxgazette.net/133/saha.html
- 30/Aug: We'll be following ch. 3 of the McKenney book at the lab.
- Up to (and including) 18/Sept: We followed McKenney's code samples for counting algorithms and his Threading API in detail
- 20/Sept: Finished statistical counters and showed how to read the assembly language generated by GCC and how to disassemble binary objects
- 25/Sept: Finished theoretical thread content for the course: up until ch. 5 of McKenney's Book + Semaphores + Thread Safety.
- Homework 2 (see below)
- Cuda/OpenCL (02/Oct)
- Project 1 handout (see below) due Oct. 23
- Mapreduce/Hadoop
- MPI
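As a preview of the MPI unit, here is the canonical hello-world sketch in C; every MPI textbook (including Pacheco's) starts from something like it. Compile with your MPI distribution's mpicc wrapper and launch with mpirun:

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal MPI program: each process reports its rank.
 * Build: mpicc hello.c -o hello
 * Run:   mpirun -np 4 ./hello */
int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* who am I?             */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many of us?       */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}
```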
Homework
We only accept document files in PDF or another open format such as .odt.
For programming assignments, the student must submit the source code together with documentation inside a single folder, compressed as .zip, .tar.gz, or .tar.bz2, with the archive name containing the student name, course, and date.
All electronically submitted material must be handed in with the string [parallel] in the subject of the email. Expect to receive an automatic confirmation.
Homework 1
Our institute is planning on buying a supercomputer, so our first homework consists of studying these systems in detail.
- The class shall be divided into groups of 2 people.
- Each group must pick one top 500 (http://www.top500.org) supercomputer and write on this wiki:
- Team number:
- Supercomputer:
- Each class will start with a 20-minute presentation from a random group (or a volunteer group).
- A handout must also be provided on Tue, August 28.
- Your project must not be a plain copy from Wikipedia!
Contents of the presentation and report
- Overview of the system
- Glossary of terms
- How to program for the system
- Software infrastructure
- How the system differs from a conventional cluster
Groups
- Team 1: Julio Stutz & Joel Sanchez
- Team 2: Claudir Oliver & Luiz Rosalba
- Team 3: Mateus Guida
- Team 4: Lyvia Aloquio & Stella Oggioni
- Supercomputer: Grifo04 (PETROBRAS/Brazil)
- Team 5: Ricardo Dias
- Team 6: Frederico Santos
- Supercomputer: Helios (International Fusion Energy Research Centre (IFERC), EU(F4E) - Japan Broader Approach collaboration) (report)
- Team 7: Thiago Luiz
- Supercomputer: Pleiades - NASA/Ames Research Center/ NAS - United States.
Homework 2: limit counters
Describe the limit counter implementations of sec. 4.3 of McKenney's book (a simplified sketch of the underlying idea appears below), answering:
- list and explain the advantages of each counter
- explain the main concepts of each method in your own words
Due date: Oct 1 (was: Sept 27)
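To fix terminology, here is a highly simplified C sketch of the general idea behind a limit counter: each thread keeps private headroom so the fast path needs no lock, and falls back to a global lock when the headroom runs out. This is an illustrative sketch only, not McKenney's code; sec. 4.3 presents several variants with different trade-offs, which is exactly what the homework asks you to describe.

```c
#include <pthread.h>

#define GLOBAL_MAX 10000UL                 /* the limit being enforced */

static pthread_mutex_t gblcnt_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned long globalcount;          /* committed count            */
static unsigned long globalreserve;        /* headroom handed to threads */
static __thread unsigned long counter;     /* this thread's local count  */
static __thread unsigned long countermax;  /* this thread's headroom     */

/* Returns 1 if delta was added without exceeding GLOBAL_MAX, else 0. */
int add_count(unsigned long delta)
{
    if (countermax - counter >= delta) {   /* fast path: lock-free */
        counter += delta;
        return 1;
    }
    pthread_mutex_lock(&gblcnt_mutex);     /* slow path */
    globalcount += counter;                /* flush local state */
    globalreserve -= countermax;
    counter = countermax = 0;
    if (globalcount + globalreserve + delta > GLOBAL_MAX) {
        pthread_mutex_unlock(&gblcnt_mutex);
        return 0;                          /* would exceed the limit */
    }
    globalcount += delta;
    /* grab half the remaining room as fresh local headroom */
    countermax = (GLOBAL_MAX - globalcount - globalreserve) / 2;
    globalreserve += countermax;
    pthread_mutex_unlock(&gblcnt_mutex);
    return 1;
}
```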
Project 1
- The class will be divided into interest groups
- The interest groups will each pick one of the following technologies or else propose another one of their liking.
- mapreduce/hadoop
- MPI
- Cuda/OpenCL
- Hybrid technologies (Distributed + Cuda)
- Languages designed for concurrency: Google Go, Erlang, etc.
- Programming for the Playstation 3 parallel architecture[9]
- OpenGL/GLSL
- Google Pregel
- The project will consist of a series of individual presentations by the group members.
- A grade will be assigned to each presentation individually
Groups to Project 1
- MPI: Claudir Oliveira - 25/10/2012 (slides) (code)
- Hybrid technologies (Distributed + Cuda): Ricardo Dias (slides) (code)
- Cuda/OpenCL: Joel Sanchez (slides) (code)
- OpenGL/GLSL : Luiz Rosalba - 08/11/2012 (slides)
- mapreduce/hadoop : Julio Stutz (slides)
- Google Go : Mateus Guida (slides)
- Programming for the Playstation 3 parallel architecture: Thiago Luiz
Project 2
- This project consists of a practical programming problem from the student's own research. The student is required to describe a problem from their research and present a parallel implementation of some aspect of it
- This will be evaluated through actual code and an accompanying monograph
- Grade: up to 4 extra points in the exam grade
Topic Suggestions
For those who haven't found a suitable research topic with their own advisors, here is a list of possible ideas. I can offer some guidance on these during the project, but please let your advisors know about it. Also, not every idea lends itself easily to a project; some will require thinking on your part about goals and objectives.
Try to email the original authors of the relevant papers for your project in order to get their code. You have to understand the paper, understand the code, and run the code. You will also have to write new code or a modification of the existing code, although you have limited time for this.
If you need any of the files below but can't access them from home, let me know and I will email you a copy.
- "Design and Implementation of a Wide Area, Large-Scale Camera Network", Kuo et. al. 2010
- Technology: distributed computing / hadoop
- Abstract
We describe a wide area camera network on a campus setting, the SCALLOPSNet (Scalable Large Optical Sensor Network). It covers with about 100 stationary cameras an expansive area that can be divided into three distinct regions: inside a building, along urban paths, and in a remote natural reserve. Some of these regions lack connections for power and communications, and, therefore, necessitate wireless, battery-powered camera nodes. In our exploration of available solutions, we found existing smart cameras to be insufficient for this task, and instead designed our own battery-powered camera nodes that communicate using 802.11b. The camera network uses the Internet Protocol on either wired or wireless networks to communicate with our central cluster, which runs cluster and cloud computing infrastructure. These frameworks like Apache Hadoop are well suited for large distributed and parallel tasks such as many computer vision algorithms. We discuss the design and implementation details of this network, together with the challenges faced in deploying such a large scale network on a research campus. We plan to make the datasets available for researchers in the computer vision community in the near future.
- "Web-Scale Computer Vision using MapReduce for Multimedia Data Mining", White, Yeh, Lin and Davis 2010
- Technology: Mapreduce
- Abstract: This work explores computer vision applications of the MapReduce framework that are relevant to the data mining community. An overview of MapReduce and common design patterns are provided for those with limited MapReduce background. We discuss both the high level theory and the low level implementation for several computer vision algorithms: classifier training, sliding windows, clustering, bag-of-features, background subtraction, and image registration. Experimental results for the k-means clustering and single Gaussian background subtraction algorithms are performed on a 410 node Hadoop cluster.
- "Building Rome on a Cloudless Day", Frahm et al, 2012
- Technology: Cuda
- Abstract:
This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC ("cloudless"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.
Other interesting works
- "GraphLab: A Distributed Framework for Machine Learning in the Cloud", Low etal., 2011
Machine Learning (ML) techniques are indispensable in a wide range of fields. Unfortunately, the exponential increase of dataset sizes are rapidly extending the runtime of sequential algorithms and threatening to slow future progress in ML. With the promise of affordable large-scale parallel computing, Cloud systems offer a viable platform to resolve the computational challenges in ML. However, designing and implementing efficient, provably correct distributed ML algorithms is often prohibitively challenging. To enable ML researchers to easily and efficiently use parallel systems, we introduced the GraphLab abstraction which is designed to represent the computational patterns in ML algorithms while permitting efficient parallel and distributed implementations. In this paper we provide a formal description of the GraphLab parallel abstraction and present an efficient distributed implementation. We conduct a comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms using real large-scale data and a 64 node EC2 cluster of 512 processors. We find that GraphLab achieves orders of magnitude performance gains over Hadoop while performing comparably or superior to hand-tuned MPI implementations.
- "Parallel Data Mining from Multicore to Cloudy Grids", Fox et. al.
We describe a suite of data mining tools that cover clustering, information retrieval and the mapping of high dimensional data to low dimensions for visualization. Preliminary applications are given to particle physics, bioinformatics and medical informatics. The data vary in dimension from low (2-20), high (thousands) to undefined (sequences with dissimilarities but not vectors defined). We use deterministic annealing to provide more robust algorithms that are relatively insensitive to local minima. We discuss the algorithm structure and their mapping to parallel architectures of different types and look at the performance of the algorithms on three classes of system: multicore, cluster and Grid using a MapReduce style algorithm. Each approach is suitable in different application scenarios. We stress that data analysis/mining of large datasets can be a supercomputer application.
- "Fast face tracking using parallel particle filter algorithm", Liu et. al., 2009
This paper proposed a multi-cue based face tracking algorithm with the help of parallel multi-core processing. Due to illumination and occlusion problems, face tracking usually does not work stably based on a single cue. Three different visual cues, color histogram, edge orientation histogram and wavelet feature, are integrated under the framework of particle filter to improve the tracking performance considerably. To handle the huge amount of computation cost resulted from the introduced multi-cue strategy, a map-reduce thread model is designed to parallel and speed up the observation steps. Besides, an online updating strategy makes our algorithm adaptable to some slight face rotations. The experimental results demonstrate that our proposed face tracking algorithm works robustly for cluttered backgrounds and different illuminations. The multi-core parallel scheme achieves a good linear speedup compared to the corresponding sequential algorithms.
- "CudaGIS: Report on the Design and Realization of a Massive Data Parallel GIS on GPUs", Zhang, You,
We report the design and realization of a high-performance parallel GIS, i.e., CudaGIS, based on the General Purpose computing on Graphics Processing Units (GPGPU) technologies. Still under active development, CudaGIS currently supports major types of geospatial data (point, polyline, polygon and raster) and provides modules for spatial indexing, spatial join and other types of geospatial operations on such geospatial data types. Experiments have demonstrated 20-40X and 1000-10000X speedups over serial CPU implementations on main-memory and disk-resident systems, respectively.
- "A Parallel Clustering Method Study Based on MapReduce", Zhanquan and Fox
Clustering is considered the most important task in data mining. The goal of clustering is to determine the intrinsic grouping in a set of unlabeled data. Many practical application problems should be solved with clustering methods. It has been widely applied in all kinds of areas, such as marketing, biology, library science, insurance, earthquake study, the World Wide Web, and so on. Many clustering methods have been studied, such as k-means, Fisher clustering, Kohonen clustering, and so on. In many areas, the scale of the data set becomes larger and larger, and classical clustering methods will not work for such large scale data sets. The study of clustering methods for large scale data is considered an important task. MapReduce is taken as the most efficient model to deal with data intensive problems, and many data mining methods based on MapReduce have been studied. In this paper, a parallel clustering method based on MapReduce is studied. The research mainly contributes the following aspects. Firstly, it determines the initial centers objectively. Secondly, information loss is taken as the distance metric between two samples. Thirdly, the clustering results are visualized with the interpolation MDS method. The efficiency of the method is illustrated with a practical DNA clustering problem.
- "Sequential and mapreduce-based algorithms for constructing an in-place multidimensional quad-tree index for answering fixed-radius nearest neighbor queries", Andreica and Tapus
Answering fixed-radius nearest neighbor queries constitutes an important problem in many areas, ranging from geographic systems to similarity searching in object databases (e.g. image and video databases). The usual approach in order to efficiently answer such queries is to construct an index. In this paper we present algorithms for constructing a multidimensional quad-tree index. We start with well-known sequential algorithms and then adapt them to the MapReduce computation model, in order to be able to handle large amounts of data. In all the algorithms the objects are indexed in association with quad-tree cells (or nodes) which they intersect (plus possibly a few other nearby cells). When processing a query, multiple quad-tree cells may be searched in order to find the answer.
- "Fast Homing Techniques for Autonomous Robots using Sparse Image Waypoints and Design of Vision-based Indoor Localization as Cloud Computing Services", PhD Thesis, 2011
- "Building and Using a Database of One Trillion Natural-Image Patches", Arietta and Lawrence, IEEECGA 2011
Class exercises
- In-class assignment (extra 0.1 on the final test): compare 3 different implementations of statistical counters: an array of per-thread variables, an eventually consistent counter, and GCC __thread variables (see the sketch after this list)
- Only Julio Stutz scored the extra +0.1 on the final exam.
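For reference, here is a sketch of the GCC __thread variant from the assignment above: increments touch only thread-local storage, while reads sum over a registry of per-thread pointers. This is a loose simplification in the spirit of McKenney's count_end.c example, not his exact code; the fixed-size registry and its locking are assumptions made for brevity.

```c
#include <pthread.h>

#define NR_THREADS 16

static __thread unsigned long counter;       /* per-thread count (TLS)   */
static unsigned long *counterp[NR_THREADS];  /* registry of TLS pointers */
static pthread_mutex_t reg_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Each thread calls this once with a unique idx before counting.
 * Threads must stay alive (or unregister) while reads occur. */
void count_register_thread(int idx)
{
    pthread_mutex_lock(&reg_mutex);
    counterp[idx] = &counter;
    pthread_mutex_unlock(&reg_mutex);
}

void inc_count(void)                         /* fast: no shared writes */
{
    counter++;
}

unsigned long read_count(void)               /* slow: sum all threads */
{
    unsigned long sum = 0;
    pthread_mutex_lock(&reg_mutex);
    for (int i = 0; i < NR_THREADS; i++)
        if (counterp[i])
            sum += *counterp[i];             /* racy reads are tolerated
                                                for a statistical counter */
    pthread_mutex_unlock(&reg_mutex);
    return sum;
}
```

To compare what GCC generates for each variant, compile with gcc -O2 -S and read the resulting .s file, or disassemble the object with objdump -d, as demonstrated in the 20/Sept lecture.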
Keywords
Portuguese: Programação Paralela, Introdução à Computação Paralela