Saint's Log


Exploring Apache’s Hadoop MapReduce Tutorial

In the last post, I described the straightforward process of setting up an Ubuntu VM in which to run Hadoop. Once you can successfully run the Hadoop MapReduce example in the MapReduce Tutorial, you may be interested in examining the source code using an IDE like Eclipse. To do so, install Eclipse:

sudo apt-get install eclipse-platform

Some common Eclipse settings to adjust:

  1. Show line numbers (Window > Preferences > General > Editors > Text Editors > Show Line Numbers).
  2. To make Eclipse use spaces instead of tabs (or vice versa), see this StackOverflow question.
  3. To auto-remove trailing whitespace in Eclipse, see this StackOverflow question.

To generate an Eclipse project for the Hadoop source code, the src/BUILDING.txt file lists these steps (which we cannot yet run):

cd ~/hadoop-2.7.4/src/hadoop-maven-plugins
mvn install
cd ..
mvn eclipse:eclipse -DskipTests

To be able to run these commands, we need to install the packages required for building Hadoop. They are also listed in the src/BUILDING.txt file. For the VM we set up, we do not need to install the packages listed under Oracle JDK 1.7. Instead, run these commands to install Maven, native libraries, and ProtocolBuffer:

sudo apt-get -y install maven
sudo apt-get -y install build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
sudo apt-get -y install libprotobuf-dev protobuf-compiler

Now here's where things get interesting. The last command installs version 2.6.1 of the ProtocolBuffer. The src/BUILDING.txt file states that version 2.5.0 is required. Turns out they aren't kidding: if you try generating the Eclipse project using version 2.6.1 (or any non-2.5.0 version), the build fails with an error complaining that the installed protoc version does not match the expected 2.5.0.

You can check the installed version by typing:

protoc --version
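Since the Maven build only fails partway in, it can be worth scripting this check up front. A small sketch (check_protoc_version is a hypothetical helper of my own, not part of any tool) that compares the reported version string against the required 2.5.0:

```shell
# Guard against a protoc version mismatch before kicking off the Maven build.
# check_protoc_version takes the output of `protoc --version`
# (e.g. "libprotoc 2.6.1") as its single argument.
check_protoc_version() {
  ver="${1#libprotoc }"            # strip the "libprotoc " prefix
  if [ "$ver" = "2.5.0" ]; then
    echo "ok"
  else
    echo "mismatch: $ver"
  fi
}

check_protoc_version "libprotoc 2.6.1"   # the version apt-get installed
check_protoc_version "libprotoc 2.5.0"   # the version Hadoop 2.7.4 expects
```

In a real script you would feed it `"$(protoc --version)"` and abort on a mismatch.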

How do we install 2.5.0? Turns out we have to build ProtocolBuffer 2.5.0 from the source code ourselves, and we need to grab the sources from GitHub now (unlike those now-outdated instructions):

mkdir ~/protobuf
cd ~/protobuf
# grab the 2.5.0 source tarball from the GitHub releases page
wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz
tar xvzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0

Now follow the instructions in the README.txt file to build the source code.

./configure --prefix=/usr
make check
sudo make install
protoc --version

The output from the last command should now be "libprotoc 2.5.0". Note: you most likely need to pass the --prefix option to ./configure; without it, protoc installs under /usr/local and the Hadoop build can pick up the wrong version or fail to find it.

Now we can finally generate the Eclipse project files for the Hadoop sources.

cd ~/hadoop-2.7.4/src/hadoop-maven-plugins
mvn install
cd ..
mvn eclipse:eclipse -DskipTests

Once project-file generation is complete:

  1. Type eclipse to launch the IDE.
  2. Go to the File > Import... menu option.
  3. Select the Existing Projects into Workspace option under General.
  4. Browse to the ~/hadoop-2.7.4/src folder in the Select root directory: input. A list of the projects in the src folder should be displayed.
  5. Click Finish to import the projects.

You should now be able to browse the imported projects and inspect the various Hadoop classes.

Filed under: Big Data

Setting up Apache Hadoop

As part of my Dynamic Big Data course, I have to set up a distributed file system to experiment with various MapReduce concepts. Let's use Hadoop since it's widely adopted. Thankfully, there are instructions on how to set up Apache Hadoop - we're starting with a single-node cluster for now.

Once installation is complete, log onto the Ubuntu OS. Set up shared folders and enable the bidirectional clipboard as follows:

  1. From the VirtualBox Devices menu, choose Insert Guest Additions CD image... A prompt will be displayed stating that "VBOXADDITIONS_5.1.26_117224" contains software intended to be automatically started. Just click on the Run button to continue and enter the root password. When the guest additions installer completes, press Return to close the window when prompted.
  2. From the VirtualBox Devices menu, choose Shared Clipboard > Bidirectional. This enables two way clipboard functionality between the guest and host.
  3. From the VirtualBox Devices menu, choose Shared Folders > Shared Folders Settings... Click on the add Shared Folder button and enter a path to a folder on the host that you would like to be shared. Optionally select Auto-mount and Make Permanent.
  4. Open a terminal window. Enter these commands to mount the shared folder (assuming you named it vmshare in step 3 above):
mkdir ~/vmshare
sudo mount -t vboxsf -o uid=$UID,gid=$(id -g) vmshare ~/vmshare
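If you'd rather not repeat the mount command on every boot, an /etc/fstab entry can make it permanent. A sketch, assuming the share is named vmshare, your home directory is /home/youruser, and your uid/gid are 1000 (check with id):

```
# /etc/fstab entry (assumptions: share named vmshare, user uid/gid 1000)
vmshare  /home/youruser/vmshare  vboxsf  uid=1000,gid=1000  0  0
```

This requires the Guest Additions from step 1 to be installed, since they provide the vboxsf filesystem driver.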

To start installing the software we need, enter these commands:

sudo apt-get update
sudo apt-get install default-jdk

Next, get a copy of the Hadoop binaries from an Apache download mirror.

cd ~/Downloads
# e.g. from the Apache archive (any mirror works):
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.4/hadoop-2.7.4.tar.gz
tar xvzf hadoop-2.7.4.tar.gz -C ~/    # creates ~/hadoop-2.7.4
cd ~/hadoop-2.7.4

The Apache Single Node Cluster Tutorial says to

export JAVA_HOME=/usr/java/latest

in the appropriate script under etc/hadoop/. On this Ubuntu setup, we end up needing to

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/

If you skip setting up this export, running bin/hadoop will give this error:

Error: JAVA_HOME is not set and could not be found.

Note: I found that setting JAVA_HOME=/usr caused subsequent processes (like generating Eclipse projects from the source using mvn) to fail even though the steps in the tutorial worked just fine.
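One way to find the correct value on a fresh Ubuntu install is to resolve the javac symlink maintained by update-alternatives and strip the trailing bin/javac. A sketch; the hard-coded path below stands in for what `readlink -f "$(which javac)"` returns on this OpenJDK 8 amd64 setup:

```shell
# On a live system, resolve the symlink chain instead of hard-coding:
#   javac_path="$(readlink -f "$(which javac)")"
javac_path="/usr/lib/jvm/java-8-openjdk-amd64/bin/javac"

# Strip the trailing /bin/javac to get the JDK root directory.
JAVA_HOME="${javac_path%/bin/javac}"
echo "$JAVA_HOME"
```

The echoed value is what goes into the export line above.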

To verify that Hadoop is now configured and ready to run (in a non-distributed mode as a single Java process), execute the commands listed in the tutorial.

$ mkdir input
$ cp etc/hadoop/*.xml input
$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar grep input output 'dfs[a-z.]+'
$ cat output/*

The bin/hadoop jar command runs the code in the .jar file (here, the grep example), passing it the last three arguments: the input folder, the output folder, and the regular expression to match. The final cat command prints each matched string with its count.
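To get an intuition for what the job matches, the same regular expression can be exercised with ordinary grep on a few sample lines (the property names below are just illustrative input, not the actual job output):

```shell
# Approximate the matching step of the MapReduce grep example with plain
# grep -Eo: print every substring matching dfs[a-z.]+ in the input.
printf 'dfs.replication\ndfs.namenode.name.dir\nmapreduce.job.maps\n' \
  | grep -Eo 'dfs[a-z.]+'
```

This prints the two dfs.* names and skips the mapreduce.* line; the MapReduce job additionally counts and sorts the matches across all input files.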

If you're interested in the details of this example, examine the src subfolder. If you don't need the binaries and just want to look at the code, you can wget just the source tarball from a download mirror instead.

Filed under: Big Data