Parallel Programming in Java

Start Date: 02/23/2020

Course Type: Common Course

Course Link: https://www.coursera.org/learn/parallel-programming-in-java


About Course

This course teaches learners (industry professionals and students) the fundamental concepts of parallel programming in the context of Java 8. Parallel programming enables developers to use multicore computers to make their applications run faster by using multiple processors at the same time. By the end of this course, you will learn how to use popular parallel Java frameworks (such as ForkJoin, Stream, and Phaser) to write parallel programs for a wide range of multicore platforms including servers, desktops, and mobile devices, while also learning about their theoretical foundations including computation graphs, ideal parallelism, parallel speedup, Amdahl's Law, data races, and determinism.

Why take this course?

• All computers are multicore computers, so it is important for you to learn how to extend your knowledge of sequential Java programming to multicore parallelism.
• Java 7 and Java 8 have introduced new frameworks for parallelism (ForkJoin, Stream) that have significantly changed the paradigms for parallel programming since the early days of Java.
• Each of the four modules in the course includes an assigned mini-project that will provide you with the necessary hands-on experience to use the concepts learned in the course on your own, after the course ends.
• During the course, you will have online access to the instructor and the mentors to get individualized answers to your questions posted on forums.

The desired learning outcomes of this course are as follows:

• Theory of parallelism: computation graphs, work, span, ideal parallelism, parallel speedup, Amdahl's Law, data races, and determinism
• Task parallelism using Java's ForkJoin framework
• Functional parallelism using Java's Future and Stream frameworks
• Loop-level parallelism with extensions for barriers and iteration grouping (chunking)
• Dataflow parallelism using the Phaser framework and data-driven tasks

Mastery of these concepts will enable you to immediately apply them in the context of multicore Java programs, and will also provide the foundation for mastering other parallel programming systems that you may encounter in the future (e.g., C++11, OpenMP, .Net Task Parallel Library).
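To make the ForkJoin framework concrete, here is a minimal sketch of a parallel array sum written against the standard java.util.concurrent API. The class name SumTask and the sequential-cutoff threshold are illustrative choices, not code taken from the course.

    import java.util.Arrays;
    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Illustrative sketch: recursively split an array sum into subtasks.
    class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 1_000; // illustrative cutoff
        private final long[] a;
        private final int lo, hi;

        SumTask(long[] a, int lo, int hi) {
            this.a = a; this.lo = lo; this.hi = hi;
        }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {        // small enough: sum sequentially
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += a[i];
                return sum;
            }
            int mid = (lo + hi) >>> 1;
            SumTask left = new SumTask(a, lo, mid);
            SumTask right = new SumTask(a, mid, hi);
            left.fork();                       // run the left half asynchronously
            long rightSum = right.compute();   // compute the right half in this thread
            return rightSum + left.join();     // wait for the left half and combine
        }
    }

    public class ParallelSum {
        public static void main(String[] args) {
            long[] a = new long[1_000_000];
            Arrays.fill(a, 1L);
            long sum = ForkJoinPool.commonPool().invoke(new SumTask(a, 0, a.length));
            System.out.println(sum);           // prints 1000000
        }
    }

The same reduction can also be expressed with the Stream framework in a single line, e.g. Arrays.stream(a).parallel().sum(), which is the style of functional parallelism the course covers alongside ForkJoin.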

Course Syllabus

In this module, we will learn the fundamentals of task parallelism. Tasks are the most basic unit of parallel programming. An increasing number of programming languages (including Java and C++) are moving from older thread-based approaches to more modern task-based approaches for parallel programming. We will learn about task creation, task termination, and the “computation graph” theoretical model for understanding various properties of task-parallel programs. These properties include work, span, ideal parallelism, parallel speedup, and Amdahl’s Law. We will also learn popular Java APIs for task parallelism, most notably the Fork/Join framework.
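To make these metrics concrete, here is a small worked example; the numbers are illustrative, not taken from the course. Work $T_1$ is the total time of all operations in the computation graph, and span $T_\infty$ is the length of its longest path, so

\[ \text{ideal parallelism} = \frac{T_1}{T_\infty}. \]

For instance, with $T_1 = 100$ and $T_\infty = 10$, no schedule on any number of processors $P$ can achieve a speedup above $T_1 / T_\infty = 10$. Amdahl's Law gives a similar bound from the sequential fraction $q$ of a program:

\[ \text{Speedup}(P) \;\le\; \frac{1}{q + (1-q)/P} \;<\; \frac{1}{q}, \]

so a program that is 10% sequential ($q = 0.1$) can never run more than 10x faster, no matter how many cores are available.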


Course Introduction

Parallel Programming in Java: This course builds on the object-oriented techniques you learned in our Principles of Java course and extends them to parallel programming in practice. We start by introducing a few standard building blocks for concurrent programming, such as threads, closures, and locks. We then explain what "forking" is and how it helps with concurrency. We conclude by showing how to use threads, and locks in combination with closures, to make your programs more efficient, including when code is shared between different threads. This course is the first step toward completing our specialization. Course Overview video - https://youtu.be/Yi8-qzt2nM1U

Course Tag

Dataflow, Parallel Computing, Java Concurrency, Data Parallelism

Related Wiki Topic

Article Example
Deterministic Parallel Java Deterministic Parallel Java (DPJ) is an extension of the Java programming language which adds parallel constructs that provide a deterministic programming model for object-oriented languages. The language extensions define a type system that a programmer (or interactive porting tool) can use to annotate Java code with type information, and a compiler can use to type-check that a DPJ program has deterministic semantics, i.e., produces the same visible output for a given input, in all executions. Parallel algorithms that cannot be expressed entirely in the statically checked type system require run-time mechanisms to enforce determinism: two key research goals are to make the type system more expressive and to minimize the need to fall back to run-time techniques. With minor modifications, language extensions should be applicable to other base OO languages, such as C++ and C#.
Parallel programming model Parallel programming models are closely related to models of computation. A model of parallel computation is an abstraction used to analyze the cost of computational processes, but it does not necessarily need to be practical, in that it need not be implementable efficiently in hardware and/or software. A programming model, in contrast, does specifically imply the practical considerations of hardware and software implementation.
Java 4K Game Programming Contest The Java 4K Game Programming Contest (aka 'Java 4K' and 'J4K') is an informal contest that was started by the Java Game Programming community to challenge their software development abilities.
Parallel programming model In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its "generality": how well a range of different problems can be expressed for a variety of different architectures, and its "performance": how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a sequential language, as an extension to an existing language, or as an entirely new language.
Parallel programming model Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.
Parallel programming model A parallel programming language may be based on one or a combination of programming models. For example, High Performance Fortran is based on shared-memory interactions and data-parallel problem decomposition, and Go provides mechanisms for shared-memory and message-passing interaction.
Pervasive Software DataRush is a dataflow parallel programming framework for the Java programming language.
Java (programming language) Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. As of 2016, Java is one of the most popular programming languages in use, particularly for client-server web applications, with a reported 9 million developers. Java was originally developed by James Gosling at Sun Microsystems (which has since been acquired by Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them.
Java (programming language) The Java programming language requires the presence of a software platform in order for compiled programs to be executed. Oracle supplies the Java platform for use with Java. The Android SDK is an alternative software platform, used primarily for developing Android applications.
Parallel programming model Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating portability of software. In this sense, programming models are referred to as "bridging" between hardware and software.
Von Neumann programming languages The differences between Fortran, C, and even Java, although considerable, are ultimately constrained by all three being based on the programming style of the von Neumann computer. If, for example, Java objects were all executed in parallel with asynchronous message passing and attribute-based declarative addressing, then Java would not be in the group.
Parallel computing Mainstream parallel programming languages remain either explicitly parallel or (at best) partially implicit, in which a programmer gives the compiler directives for parallelization. A few fully implicit parallel programming languages exist—SISAL, Parallel Haskell, SequenceL, System C (for FPGAs), Mitrion-C, VHDL, and Verilog.
Java (programming language) Java contains multiple types of garbage collectors. By default, HotSpot uses the parallel scavenge garbage collector. However, there are also several other garbage collectors that can be used to manage the heap. For 90% of applications in Java, the Concurrent Mark-Sweep (CMS) garbage collector is sufficient. Oracle aims to replace CMS with the Garbage-First collector (G1).
Java concurrency Most implementations of the Java virtual machine run as a single process and in the Java programming language, concurrent programming is mostly concerned with threads (also called lightweight processes). Multiple processes can only be realized with multiple JVMs.
Java syntax Java has built-in tools for multi-thread programming. For the purposes of thread synchronization, the synchronized statement is included in the Java language (see the runnable sketch at the end of this list).
Java memory model The Java memory model describes how threads in the Java programming language interact through memory. Together with the description of single-threaded execution of code, the memory model provides the semantics of the Java programming language.
Symposium on Principles and Practice of Parallel Programming PPoPP, the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, is an academic conference in the field of parallel programming. PPoPP is sponsored by the Association for Computing Machinery special interest group SIGPLAN.
Parallel programming model Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to race conditions and mechanisms such as locks, semaphores and monitors can be used to avoid these. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.
Java performance Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and Java application programming interface (API).
Programming paradigm For parallel computing, using a programming model instead of a language is common. The reason is that details of the parallel hardware leak into the abstractions used to program the hardware. This causes the programmer to have to map patterns in the algorithm onto patterns in the execution model (which have been inserted due to leakage of hardware into the abstraction). As a consequence, no one parallel programming language maps well to all computation problems. It is thus more convenient to use a base sequential language and insert API calls to parallel execution models, via a programming model. Such parallel programming models can be classified according to abstractions that reflect the hardware, such as shared memory, distributed memory with message passing, notions of "place" visible in the code, and so forth. These can be considered flavors of programming paradigm that apply to only parallel languages and programming models.
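Tying together the excerpts above on Java threads, the synchronized statement, and avoiding races on shared memory, here is a minimal runnable sketch; the Counter class and the iteration counts are illustrative, not drawn from any of the articles.

    // Illustrative sketch: two threads incrementing a shared counter.
    public class Counter {
        private long count = 0;

        // Without "synchronized", two threads could interleave the
        // read-add-write sequence and lose updates (a data race on count).
        public synchronized void increment() {
            count++;
        }

        public synchronized long get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            Counter c = new Counter();
            Runnable work = () -> {
                for (int i = 0; i < 100_000; i++) c.increment();
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(c.get()); // always 200000 with synchronization
        }
    }

Removing the synchronized keyword turns increment() into an unsynchronized read-modify-write, so the final count would often fall short of 200000 and vary from run to run; this nondeterminism is exactly the data-race behavior the excerpts above describe.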