Hadoop FUSE Installation and Configuration on CentOS

What is Fuse?

FUSE (Filesystem in Userspace) lets you write a normal userland application that acts as a bridge to a conventional file system interface. The hadoop-hdfs-fuse package lets you use your HDFS cluster as if it were a conventional file system on Linux. It is assumed that you already have a working HDFS cluster and know the hostname and port that your NameNode exposes. Hadoop FUSE installation and configuration (mounting HDFS via FUSE) is done by following the steps below.

Step 1 :  Required Dependencies
Step 2 :  Download and Install FUSE
Step 3 :  Install RPM Packages
Step 4 :  Modify HDFS FUSE
Step 5 :  Check  HADOOP Services
Step 6 :  Create a Directory to Mount HADOOP
Step 7 :  Modify HDFS-MOUNT Script
Step 8 :  Create softlinks of LIBHDFS.SO
Step 9 :  Check Memory Details

To start the Hadoop FUSE installation and configuration, follow these steps:

Step 1 :  Required Dependencies

Hadoop  single / multinode  Cluster (started mode)
jdk (preinstalled)
This FUSE mount installation and configuration guide was prepared on the following platform and services:
Operating System        : CentOS release 6.4 (Final) 32bit
hadoop                  : hadoop-1.2.1
mysql-server            : 5.1.71

JDK           : java version “1.7.0_45″ 32bit (jdk-7u45-linux-i586.rpm)
fuse          : hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz
fuse RPMs     : fuse-libs-2.8.3-4.el6.i686,
                fuse-2.8.3-4.el6.i686,
                fuse-devel-2.8.3-4.el6.i686
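Before proceeding, it can help to confirm that the prerequisite tools are actually on the PATH. The sketch below is a hypothetical helper, not part of the original guide; the command names mirror the dependency list above (java and jps come with the JDK, wget and tar are used in the next step).

```shell
#!/bin/sh
# Hypothetical prerequisite check -- not part of the hdfs-fuse package.
# Reports whether each required tool is available on PATH.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: missing"
    fi
}

# java/jps come with the JDK; wget and tar are used in Step 2.
for c in java jps wget tar; do
    check_cmd "$c"
done
```

If anything prints `missing`, install it before continuing.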

Step 2 :  Download and Install FUSE

Log in as the hadoop user on a node in the Hadoop cluster (master / datanode), then download hdfs-fuse from the following location:

[hadoop@hadoop ~]$ wget https://hdfs-fuse.googlecode.com/files/hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz

Extract hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz:

[hadoop@hadoop ~]$ tar -zxvf hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz

Step 3 :  Install RPM Packages

Switch to the root user to install the following RPM packages:

[hadoop@hadoop ~]$ su - root

[root@hadoop ~]# yum install fuse*

[root@hadoop ~]# chmod +x /usr/bin/fusermount

Step 4 :  Modify HDFS FUSE

After installing the RPM packages, switch back to the hadoop user:

[root@hadoop ~]# su - hadoop

Modify the HDFS FUSE configuration / environment variables:

[hadoop@hadoop ~]$ cd hdfs-fuse/conf/

Add the following lines to hdfs-fuse.conf:

[hadoop@hadoop conf]$ vi hdfs-fuse.conf

export JAVA_HOME=/usr/java/jdk1.7.0_45               # JAVA_HOME path
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1         # Hadoop installation home path
export FUSE_HOME=/home/hadoop                        # FUSE installation path
export HDFS_FUSE_HOME=/home/hadoop/hdfs-fuse         # FUSE home path
export HDFS_FUSE_CONF=/home/hadoop/hdfs-fuse/conf    # FUSE configuration path
LogDir /tmp
LogLevel LOG_DEBUG
Hostname                                             # Hadoop master node IP
Port 9099                                            # Hadoop port number (modify as per your Hadoop configuration)

Save & Exit (:wq!)

Step 5 :  Check Hadoop Services

[hadoop@hadoop conf]$cd ..

Verify the Hadoop instance is running:

[hadoop@hadoop hdfs-fuse]$ jps

2643 TaskTracker
4704 Jps
2206 NameNode
2516 JobTracker
2432 SecondaryNameNode
2316 DataNode
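If any of the daemons above are missing from the `jps` output, the mount in the later steps will not work. The sketch below is a hypothetical helper (not from the guide) that reports any expected Hadoop 1.x daemons absent from the listing; it reads the `jps` output on stdin so the logic can be exercised without a live cluster.

```shell
#!/bin/sh
# Hypothetical helper: print which expected Hadoop 1.x daemons are absent
# from `jps` output supplied on stdin.
missing_daemons() {
    out=$(cat)
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        case "$out" in
            *"$d"*) ;;          # daemon is listed in the jps output
            *) echo "$d" ;;     # daemon not found -- report it
        esac
    done
}

# On a live node you would run: jps | missing_daemons
```

An empty result means all five daemons are up.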

Step 6 :  Create a Directory to Mount Hadoop

Create a directory to mount the Hadoop file system on:

[hadoop@hadoop hdfs-fuse]$ mkdir /home/hadoop/hdfsmount

[hadoop@hadoop hdfs-fuse]$ cd

[hadoop@hadoop ~]$ pwd

Step 7 :  Modify HDFS-MOUNT Script

Switch to the HDFS FUSE binary folder in order to run the mount script:

[hadoop@hadoop ~]$ cd hdfs-fuse/bin/

Modify the hdfs-mount script to set the JVM path and other environment settings. In this installation guide, the JVM is located as follows:

[hadoop@hadoop bin]$ vi hdfs-mount

JAVA_JVM_DIR=/usr/java/jdk1.7.0_45/jre/lib/i386/server
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_HOME=/home/hadoop/hadoop-1.2.1
export FUSE_HOME=/home/hadoop
export HDFS_FUSE_HOME=/home/hadoop/hdfs-fuse
export HDFS_FUSE_CONF=/home/hadoop/hdfs-fuse/conf

Save & Exit (:wq!)
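If JAVA_JVM_DIR or HADOOP_HOME points at a directory that does not exist, the mount will not work, so a quick existence check before running hdfs-mount can save debugging time. The helper below is a hypothetical sketch; the paths match this guide's JDK and Hadoop layout and should be adjusted to yours.

```shell
#!/bin/sh
# Hypothetical pre-mount sanity check -- not part of hdfs-fuse itself.
# Prints "ok" or "missing" for each directory it is given.
require_dir() {
    if [ -d "$1" ]; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

# Paths taken from the hdfs-mount settings above; adjust for your layout.
require_dir /usr/java/jdk1.7.0_45/jre/lib/i386/server
require_dir /home/hadoop/hadoop-1.2.1
```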

Step 8 :  Create softlinks of libhdfs.so

Create a softlink to libhdfs.so, which is located at /home/hadoop/hadoop-1.2.1/c++/Linux-i386-32/lib/libhdfs.so:

[root@hadoop ~]# cd /home/hadoop/hdfs-fuse/lib/

[root@hadoop lib]# ln -s /home/hadoop/hadoop-1.2.1/c++/Linux-i386-32/lib/libhdfs.so .

Mount the HDFS file system on /home/hadoop/hdfsmount:

[hadoop@hadoop bin]$ ./hdfs-mount /home/hadoop/hdfsmount

[hadoop@hadoop bin]$ ./hdfs-mount -d /home/hadoop/hdfsmount   (the -d option enables debug output)
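To confirm from a script that the mount succeeded, you can look for the mount point in `df` output. The helper below is a hypothetical sketch, not part of the guide; it reads the listing on stdin so the matching logic can be exercised without a live mount.

```shell
#!/bin/sh
# Hypothetical helper: succeed (exit 0) when the given mount point appears
# as the last field of a df-style listing read from stdin.
# Note: the path is used as a grep pattern, so regex metacharacters in it
# would need escaping.
is_mounted() {
    grep -q "[[:space:]]$1\$"
}

# Live usage: df -h | is_mounted /home/hadoop/hdfsmount && echo "mounted"
```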

Step 9 :  Check Memory Details

[hadoop@hadoop bin]$ df -h

Filesystem                                          Size  Used Avail Use% Mounted on
/dev/mapper/vg_hadoop-lv_root                       50G  1.4G   46G   3% /
tmpfs                                               504M     0  504M   0% /dev/shm
/dev/sda1                                           485M   30M  430M   7% /boot
/dev/mapper/vg_hadoop-lv_home                       29G  1.2G   27G   5% /home
hdfs-fuse                                           768M   64M  704M   9% /home/hadoop/hdfsmount

[hadoop@hadoop bin]$ ls /home/hadoop/hdfsmount/

tmp  user

Use the "fusermount" command below to unmount the Hadoop file system:

[hadoop@hadoop bin]$fusermount -u /home/hadoop/hdfsmount

The FUSE mount is now ready to use as a local file system.


Copyright ©Solutions@Experts.com