CentOS 7
Hadoop 2.6.0

1. Set the hostnames (run the matching command on each of the three nodes)
hostnamectl set-hostname hadoop1   # on node 1
hostnamectl set-hostname hadoop2   # on node 2
hostnamectl set-hostname hadoop3   # on node 3
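A quick check that the new name took effect (the shell prompt itself only updates on the next login):
hostname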

2. Edit the hosts file (same entries on all three nodes)
vim /etc/hosts
Add:
192.168.200.6 hadoop1
192.168.200.7 hadoop2
192.168.200.8 hadoop3
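An optional sanity check that the names resolve:
ping -c 1 hadoop2
ping -c 1 hadoop3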

3. Turn off the firewall (on all three nodes)
systemctl stop firewalld
Check the status afterwards: systemctl status firewalld
Active: inactive (dead) means the firewall is stopped.
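stop only lasts until the next reboot; to keep the firewall off permanently, also disable the service:
systemctl disable firewalld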

4. Install the JDK and Hadoop (on hadoop1 only; distribute to the other nodes after passwordless login is set up)
mkdir /hadoop
Upload hadoop-2.6.0.tar.gz and jdk-8u261-linux-x64.tar.gz into /hadoop with XFTP
[root@hadoop1 hadoop]# tar -zxf /hadoop/hadoop-2.6.0.tar.gz -C /hadoop
[root@hadoop1 hadoop]# tar -zxf /hadoop/jdk-8u261-linux-x64.tar.gz -C /hadoop
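Both archives should now be unpacked next to the tarballs:
[root@hadoop1 hadoop]# ls /hadoop
hadoop-2.6.0  hadoop-2.6.0.tar.gz  jdk1.8.0_261  jdk-8u261-linux-x64.tar.gz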

5. Configure environment variables (edit /etc/profile on all three nodes; run source /etc/profile on hadoop1 now, and on hadoop2/hadoop3 only after the files have been distributed)
[root@hadoop1 hadoop]# vi /etc/profile
Append:

export JAVA_HOME="/hadoop/jdk1.8.0_261"
export HADOOP_HOME="/hadoop/hadoop-2.6.0"
export PATH=$PATH:${HADOOP_HOME}/bin
export PATH=$JAVA_HOME/bin:$PATH
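On hadoop1, a quick verification after sourcing the file:
[root@hadoop1 hadoop]# source /etc/profile
[root@hadoop1 hadoop]# java -version
[root@hadoop1 hadoop]# hadoop version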

6. Set up passwordless login: generate the key pairs (on all three nodes)

[root@hadoop1 /]# ssh-keygen -t rsa

Press Enter at every prompt to accept the defaults.

7. Distribute the public keys (on all three nodes)

[root@hadoop1 /]# ssh-copy-id hadoop1 
[root@hadoop1 /]# ssh-copy-id hadoop2 
[root@hadoop1 /]# ssh-copy-id hadoop3 

All three machines generate a key pair and copy it to every node, themselves included.
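Passwordless login can then be spot-checked from any node; no password prompt should appear:
[root@hadoop1 /]# ssh hadoop2 hostname
hadoop2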

8. vi /hadoop/hadoop-2.6.0/etc/hadoop/core-site.xml (on hadoop1 only)

<property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
</property>
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
</property>
<property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
</property>
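hadoop.tmp.dir points at /hadoop/tmp. Hadoop normally creates this directory on demand, but creating it up front on every node does no harm:
mkdir -p /hadoop/tmp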
    

9. vi /hadoop/hadoop-2.6.0/etc/hadoop/hdfs-site.xml (on hadoop1 only)

<property>
        <name>dfs.replication</name>
        <value>2</value>
        <description>number of replicas per block, not the node count</description>
</property>
    

10. cp /hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template /hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml
vi /hadoop/hadoop-2.6.0/etc/hadoop/mapred-site.xml (on hadoop1 only)

<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
</property>
<property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>hadoop1:50030</value>
</property>
<property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
</property>
<property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
</property>
<property>
        <name>mapred.job.tracker</name>
        <value>http://hadoop1:9001</value>
</property>
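The two jobtracker properties are MRv1 leftovers and are ignored once mapreduce.framework.name is set to yarn; they can be kept or dropped. The job history addresses only take effect if the history server is actually running, which start-all.sh does not handle; it can be started separately after the cluster is up:
[root@hadoop1 hadoop-2.6.0]# sbin/mr-jobhistory-daemon.sh start historyserver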
    

11. vi /hadoop/hadoop-2.6.0/etc/hadoop/yarn-site.xml (on hadoop1 only)

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop1</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop1:8032</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop1:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop1:8031</value>
</property>
<property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop1:8033</value>
</property>
<property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop1:8088</value>
</property>
    

12. vi /hadoop/hadoop-2.6.0/etc/hadoop/slaves (on hadoop1 only)

hadoop1
hadoop2
hadoop3

Every host listed here runs a DataNode and a NodeManager at startup. With hadoop1 included, those two processes should also appear on hadoop1; the process list in step 18 does not show them, so either remove hadoop1 from slaves or expect two extra processes there.

13. vi /hadoop/hadoop-2.6.0/etc/hadoop/hadoop-env.sh (on hadoop1 only)

export JAVA_HOME=/hadoop/jdk1.8.0_261

14. Distribute Hadoop (on hadoop1 only)

[root@hadoop1 hadoop-2.6.0]# scp -r /hadoop/hadoop-2.6.0/ hadoop2:/hadoop/hadoop-2.6.0
[root@hadoop1 hadoop-2.6.0]# scp -r /hadoop/hadoop-2.6.0/ hadoop3:/hadoop/hadoop-2.6.0

15. Distribute the JDK (afterwards, run source /etc/profile on hadoop2 and hadoop3)

[root@hadoop1 hadoop-2.6.0]# scp -r /hadoop/jdk1.8.0_261/ hadoop2:/hadoop/jdk1.8.0_261
[root@hadoop1 hadoop-2.6.0]# scp -r /hadoop/jdk1.8.0_261/ hadoop3:/hadoop/jdk1.8.0_261

16. Format the NameNode (on hadoop1 only)
cd /hadoop/hadoop-2.6.0/bin/
./hdfs namenode -format
(./hadoop namenode -format also works but is deprecated in 2.x. Look for "has been successfully formatted" in the output.)

17. Start the cluster (on hadoop1 only)
cd /hadoop/hadoop-2.6.0/sbin
./start-all.sh
Answer yes to each first-time SSH host-key prompt.
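start-all.sh is deprecated in Hadoop 2 and simply calls the two scripts below; running them separately is equivalent and makes failures easier to localize:
./start-dfs.sh
./start-yarn.sh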

18. Check the processes on each host (jps)

hadoop1:

4080 Jps
3649 SecondaryNameNode
3462 NameNode
3798 ResourceManager

hadoop2:

3173 DataNode
3269 NodeManager
3389 Jps

hadoop3:

3062 DataNode
3270 Jps
3149 NodeManager
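
As a final smoke test, a couple of HDFS and YARN commands (any small command works; these are just examples):
[root@hadoop1 ~]# hdfs dfs -mkdir /test
[root@hadoop1 ~]# hdfs dfs -ls /
[root@hadoop1 ~]# yarn node -list
The web UIs should also be reachable: the NameNode at http://hadoop1:50070 and the ResourceManager at http://hadoop1:8088 (the port set in yarn-site.xml above).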