Cloudera Developer Training for Apache Hadoop

Duration: 4 Days
Course Price: $2,995

Course Overview

Learn iT!’s four-day developer training course delivers the key concepts and expertise participants need to create robust data processing applications using Apache Hadoop. From workflow implementation and working with APIs through writing MapReduce code and executing joins, Cloudera’s training course is the best preparation for the real-world challenges faced by Hadoop developers.

Hands-On Hadoop

Through instructor-led discussion and interactive, hands-on exercises, participants will navigate the Hadoop ecosystem, learning topics such as:

The internals of MapReduce and HDFS and how to write MapReduce code

Best practices for Hadoop development, debugging, and implementation of workflows and common algorithms

How to leverage Hive, Pig, Sqoop, Flume, Oozie, and other Hadoop ecosystem projects

Creating custom components such as WritableComparables and InputFormats to manage complex data types

Writing and executing joins to link data sets in MapReduce

Advanced Hadoop API topics required for real-world data analysis

Developer Certification
Upon completion of the course, attendees receive a Cloudera Certified Developer for Apache Hadoop (CCDH) practice test. Certification is a great differentiator; it helps establish you as a leader in the field, providing employers and customers with tangible evidence of your skills and expertise.

Audience & Prerequisites
This course is best suited to developers and engineers who have programming experience. Knowledge of Java is strongly recommended and is required to complete the hands-on exercises.

Course Outline

Introduction
 
The Motivation for Hadoop

Problems with Traditional Large-Scale Systems

Introducing Hadoop

Hadoopable Problems
 
Hadoop: Basic Concepts and HDFS

The Hadoop Project and Hadoop Components

The Hadoop Distributed File System
 
Introduction to MapReduce

MapReduce Overview

Example: WordCount (see the sketch after this module)

Mappers

Reducers
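
For orientation, here is a minimal sketch of the WordCount example named above, written against the org.apache.hadoop.mapreduce API. It is illustrative rather than the exact code used in class:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper: emits (word, 1) for every word in the input line.
    public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
          word.set(tokens.nextToken());
          context.write(word, ONE);
        }
      }
    }

    // Reducer (in its own file in practice): sums the counts per word.
    class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values,
                            Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable value : values) {
          sum += value.get();
        }
        context.write(key, new IntWritable(sum));
      }
    }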
 
Hadoop Clusters and the Hadoop Ecosystem

Hadoop Cluster Overview

Hadoop Jobs and Tasks

Other Hadoop Ecosystem Components
 
Writing a MapReduce Program in Java

Basic MapReduce API Concepts

Writing MapReduce Drivers, Mappers, and Reducers in Java (driver sketch after this module)

Speeding Up Hadoop Development by Using Eclipse

Differences Between the Old and New MapReduce APIs
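
As a taste of the driver code this module covers, the sketch below wires the hypothetical WordCount mapper and reducer from earlier into a job and submits it. Job.getInstance assumes Hadoop 2 or later; older code uses the new Job(conf) style:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Driver: wires the mapper, reducer, and I/O paths together.
    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }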
 
Writing a MapReduce Program Using Streaming

Writing Mappers and Reducers with the Streaming API

Unit Testing MapReduce Programs

Unit Testing

The JUnit and MRUnit Testing Frameworks

Writing Unit Tests with MRUnit (example after this module)

Running Unit Tests
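
A minimal MRUnit test of the hypothetical WordCount mapper sketched earlier might look like the following. It runs entirely in memory; no cluster is required:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class WordCountMapperTest {

      private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

      @Before
      public void setUp() {
        mapDriver = MapDriver.newMapDriver(new WordCountMapper());
      }

      @Test
      public void testMapEmitsOnePerWord() throws Exception {
        // MRUnit checks both the values and the order of the outputs.
        mapDriver.withInput(new LongWritable(0), new Text("cat cat dog"))
                 .withOutput(new Text("cat"), new IntWritable(1))
                 .withOutput(new Text("cat"), new IntWritable(1))
                 .withOutput(new Text("dog"), new IntWritable(1))
                 .runTest();
      }
    }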
 
Delving Deeper into the Hadoop API

Using the ToolRunner Class (see the sketch after this module)

Setting Up and Tearing Down Mappers and Reducers

Decreasing the Amount of Intermediate Data with Combiners

Accessing HDFS Programmatically

Using The Distributed Cache

Using the Hadoop API’s Library of Mappers, Reducers, and Partitioners
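
The ToolRunner pattern referenced above typically looks like the sketch below. ToolRunner parses standard Hadoop options (such as -D key=value or -files) into the job's Configuration before handing the remaining arguments to run():

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class WordCountTool extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // Job setup would go here, built from getConf() so that
        // command-line -D overrides are picked up automatically.
        Configuration conf = getConf();
        System.out.println("mapreduce.job.reduces = "
            + conf.get("mapreduce.job.reduces", "(default)"));
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(),
                                   new WordCountTool(), args));
      }
    }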
 
Practical Development Tips and Techniques

Strategies for Debugging MapReduce Code

Testing MapReduce Code Locally by Using LocalJobRunner

Writing and Viewing Log Files

Retrieving Job Information with Counters (see the sketch after this module)

Reusing Objects

Creating Map-Only MapReduce Jobs
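
As one illustration of these techniques, the hypothetical map-only mapper below (run with the number of reducers set to zero) uses a custom counter to report how many malformed records it dropped. The RecordQuality enum is an invented example, not part of the Hadoop API:

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ValidatingMapper
        extends Mapper<LongWritable, Text, Text, NullWritable> {

      // Counter totals appear in the job's web UI and client output.
      public enum RecordQuality { WELL_FORMED, MALFORMED }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // Assumed record layout: at least three tab-separated fields.
        if (value.toString().split("\t").length < 3) {
          context.getCounter(RecordQuality.MALFORMED).increment(1);
          return;  // drop the bad record
        }
        context.getCounter(RecordQuality.WELL_FORMED).increment(1);
        context.write(value, NullWritable.get());
      }
    }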
 
Partitioners and Reducers

How Partitioners and Reducers Work Together

Determining the Optimal Number of Reducers for a Job

Writing Custom Partitioners (sketched below)
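
A custom partitioner of the kind written in this module might look like the sketch below, which routes keys to reducers by their first character so that words starting with the same letter land in the same output file (enabled in the driver with job.setPartitionerClass):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class FirstLetterPartitioner
        extends Partitioner<Text, IntWritable> {

      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
          return 0;
        }
        // The result must fall in [0, numPartitions).
        return key.toString().charAt(0) % numPartitions;
      }
    }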
 
Data Input and Output

Creating Custom Writable and WritableComparable Implementations (see the sketch after this module)

Saving Binary Data Using SequenceFile and Avro Data Files

Issues to Consider When Using File Compression

Implementing Custom InputFormats and OutputFormats
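
A custom WritableComparable of the sort built in this module might look like this sketch of a hypothetical composite name key. Hadoop calls write()/readFields() to serialize the key and compareTo() to sort it during the shuffle:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;

    import org.apache.hadoop.io.WritableComparable;

    public class NameKey implements WritableComparable<NameKey> {

      private String lastName = "";
      private String firstName = "";

      public NameKey() {}  // Hadoop requires a no-argument constructor

      public void set(String last, String first) {
        lastName = last;
        firstName = first;
      }

      @Override
      public void write(DataOutput out) throws IOException {
        out.writeUTF(lastName);
        out.writeUTF(firstName);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        lastName = in.readUTF();
        firstName = in.readUTF();
      }

      @Override
      public int compareTo(NameKey other) {
        int cmp = lastName.compareTo(other.lastName);
        return (cmp != 0) ? cmp : firstName.compareTo(other.firstName);
      }

      @Override
      public int hashCode() {
        // Equal keys must hash identically so the default partitioner
        // sends them to the same reducer.
        return lastName.hashCode() * 31 + firstName.hashCode();
      }

      @Override
      public boolean equals(Object o) {
        if (!(o instanceof NameKey)) return false;
        NameKey other = (NameKey) o;
        return lastName.equals(other.lastName)
            && firstName.equals(other.firstName);
      }
    }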
 
Common MapReduce Algorithms

Sorting and Searching Large Data Sets

Indexing Data

Computing Term Frequency-Inverse Document Frequency (TF-IDF)

Calculating Word Co-Occurrence (see the sketch after this module)

Performing Secondary Sort
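
As a flavor of these algorithms, the sketch below uses the common "pairs" approach to word co-occurrence: each adjacent pair of words is emitted with a count of one, and the same summing reducer used for WordCount totals the pairs:

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class CoOccurrenceMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);
      private final Text pair = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String[] words = value.toString().split("\\s+");
        for (int i = 0; i < words.length - 1; i++) {
          if (words[i].isEmpty()) continue;  // leading-whitespace artifact
          pair.set(words[i] + "," + words[i + 1]);
          context.write(pair, ONE);
        }
      }
    }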
 
Joining Data Sets in MapReduce Jobs

Writing a Map-Side Join (see the sketch after this module)

Writing a Reduce-Side Join
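
A map-side join of the kind written in class typically ships the smaller data set to every mapper (for example with the -files generic option), loads it into memory in setup(), and joins against it in map(), avoiding a reduce phase entirely. The file name and record layouts below are illustrative assumptions:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class MapSideJoinMapper
        extends Mapper<LongWritable, Text, Text, Text> {

      private final Map<String, String> stateNames = new HashMap<>();
      private final Text outKey = new Text();
      private final Text outValue = new Text();

      @Override
      protected void setup(Context context) throws IOException {
        // "states.txt" is the cached file's name in the task's working
        // directory; assumed line format: "code<TAB>name".
        try (BufferedReader reader =
                 new BufferedReader(new FileReader("states.txt"))) {
          String line;
          while ((line = reader.readLine()) != null) {
            String[] fields = line.split("\t");
            stateNames.put(fields[0], fields[1]);
          }
        }
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // Assumed input record: "customerId<TAB>stateCode"
        String[] fields = value.toString().split("\t");
        outKey.set(fields[0]);
        outValue.set(stateNames.getOrDefault(fields[1], "UNKNOWN"));
        context.write(outKey, outValue);
      }
    }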
 
Integrating Hadoop into the Enterprise Workflow

Integrating Hadoop into an Existing Enterprise

Loading Data from an RDBMS into HDFS by Using Sqoop

Managing Real-Time Data Using Flume

Accessing HDFS from Legacy Systems with FuseDFS and HttpFS
 
An Introduction to Hive, Impala, and Pig

The Motivation for Hive, Impala, and Pig

Hive Overview

Impala Overview

Pig Overview

Choosing Between Hive, Impala, and Pig
 
An Introduction to Oozie

Introduction to Oozie

Creating Oozie Workflows
 
Conclusion