
Hadoop on a VM

You can set up a virtual Hadoop cluster running in a virtual machine. You may find it easier to test and experiment with this setup than using our Cluster. Of course, this won't be a fast way to process multiple terabytes of data, but it will be enough to test your code on small data sets.

The instructions here use the Cascading Hadoop Cluster (forked from the original to add Spark and Hive support) to get things running.

The single-node setup will require about 2GB of RAM, and the four-node about 4GB (but see below for details). If you have a computer that can allocate that (and still run your OS and web browser and whatever else), then this is a good solution for you.


  1. Install VirtualBox. In Ubuntu, this can be done by installing the package virtualbox-qt.
  2. Install Vagrant. In Ubuntu, install the vagrant package.
  3. Get the virtual cluster configuration code: git clone
  4. If you want the single-node version of the “cluster”, change to the single-node directory. If your computer can handle the four-node version, stay in the repository root directory.
  5. See “Customizing your VM” below: there may be some ways you want to customize your Vagrantfile.
  6. Start the cluster: vagrant up. This will take some time on the first run (maybe 45 minutes) and download a bunch of packages, so do it when you're plugged in and on a decent network, not tethered to your phone on the bus.
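The steps above can be sketched as a shell session. The repository URL is not given in step 3, so a placeholder is used here, and the directory names are assumptions rather than the repo's actual layout:

```shell
# Steps 1-2: install VirtualBox and Vagrant (Ubuntu packages)
sudo apt-get install virtualbox-qt vagrant

# Step 3: get the cluster configuration (substitute the actual repository URL)
git clone <cluster-repo-url>

# Steps 4-5: pick the configuration you want and edit its Vagrantfile
cd <cluster-repo>              # four-node version, or:
cd <cluster-repo>/single-node  # single-node version

# Step 6: start the cluster (expect a long first run)
vagrant up
```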

Customizing your VM

In the Vagrantfile (in the repo root for the four-node configuration, or in single-node for the one-node version), you can set the CPU and memory given to each VM to something reasonable. If you're using the multi-node setup, remember that you are going to be hosting four VMs with these specs. [I have had bad luck with less than 1024MB of memory in the multi-node setup, or 2048MB for the single-node.]

You can also add a shared folder so the code you're working on (on your actual computer) is available in the VM. Inside the master node config add a synced_folder line like this:

config.vm.define :master, primary: true do |master|
  # use the block variable so the folder is synced on the master node only
  master.vm.synced_folder "/home/me/CMPT732", "/home/vagrant/CMPT732"
  # ... rest of the existing master config ...
end

Starting your “cluster”

First get the VM(s) running and SSH in:

vagrant up
vagrant ssh

The first time you start the cluster, you need to initialize the filesystem:

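The initialization command itself is missing from this page. On a stock Hadoop install, initializing the filesystem means formatting the HDFS NameNode; this is an assumption here, since the virtual cluster's repository may wrap the step in its own script:

```shell
# Format the HDFS NameNode (typical stock-Hadoop command; the cluster
# repository may provide its own wrapper script instead)
hdfs namenode -format
```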

Then in the VM, start the Hadoop cluster:

sudo # if you need HBase running
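The actual start commands have been lost from this page; only a fragment remains above. On a standard Hadoop/HBase install, the rough equivalents are the stock startup scripts below; this is an assumption, and the VM may instead ship its own sudo wrapper scripts:

```shell
# Stock Hadoop/HBase startup scripts (an assumption; the VM's own
# scripts, if any, take precedence)
start-dfs.sh     # HDFS daemons
start-yarn.sh    # YARN ResourceManager and NodeManagers
start-hbase.sh   # only if you need HBase running
```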

You can access the web frontends for the cluster at these URLs:

Stopping your “cluster”

Inside the master node:

sudo # if you started HBase
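As with startup, the stop commands are only a fragment above. The stock Hadoop/HBase shutdown scripts, run in the reverse order of startup, would be roughly the following (an assumption; the VM may provide its own wrappers):

```shell
# Stock shutdown scripts, reverse of startup order (an assumption)
stop-hbase.sh   # only if you started HBase
stop-yarn.sh
stop-dfs.sh
```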

And then exit the SSH session and shut down the nodes:

vagrant halt
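A few other standard Vagrant commands are useful for managing the VMs; these are stock Vagrant CLI, not specific to this cluster:

```shell
vagrant halt      # shut the VMs down; a later 'vagrant up' restores them
vagrant suspend   # save VM state to disk instead of a full shutdown
vagrant status    # check which VMs are currently running
vagrant destroy   # delete the VMs entirely to reclaim disk space
```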
Updated Mon Aug. 29 2022, 10:52 by ggbaker.