Hadoop install
=== ENV ===
==== USER ====
groupadd hadoop -g 1001
useradd hdfs -g hadoop -u 1001

==== Java ====
ln -s /usr/java/jdk1.8.0_361/jre/bin/java /usr/bin/java

<nowiki>#</nowiki> Java Error
Java 8 only understands -version (single dash); the GNU-style --version flag exists only in JDK 9 and later:
$ java --version
Unrecognized option: --version
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
$ java -version
java version "1.8.0_361"
Java(TM) SE Runtime Environment (build 1.8.0_361-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.361-b09, mixed mode)

==== hadoop ====
ln -s /opt/hadoop-3.3.0 /opt/hadoop

==== profile ====
<nowiki>#</nowiki> Java, 20201010, Adam
export JAVA_HOME=/usr/java/jdk1.8.0_361
export PATH=$PATH:$JAVA_HOME/bin
<nowiki>#</nowiki> hadoop, 20201010, Adam
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

=== Hadoop Configuration ===
==== Set JAVA_HOME in the Hadoop environment scripts ====
<nowiki>#</nowiki> Hadoop runs as daemons and does not read the JAVA_HOME exported in /etc/profile
<nowiki>#</nowiki> /opt/hadoop/etc/hadoop/
<nowiki>#</nowiki> hadoop-env.sh, mapred-env.sh, yarn-env.sh
cp hadoop-env.sh hadoop-env.sh.20210409
cp mapred-env.sh mapred-env.sh.20210409
cp yarn-env.sh yarn-env.sh.20210409
echo '
<nowiki>#</nowiki> hdfs, 20210409, Adam
export JAVA_HOME=/usr/java/jdk1.8.0_361' >> hadoop-env.sh # likewise for mapred-env.sh and yarn-env.sh

==== Setup ====
===== core-site.xml (Common component) =====
<configuration>
  <property>
    <!-- HDFS NameNode address -->
    <name>fs.defaultFS</name>
    <value>hdfs://'''g2-hdfs-01:9000'''</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <!-- Directory for temporary files -->
    <name>hadoop.tmp.dir</name>
    <value>/u01/hdfs/tmp</value>
  </property>
</configuration>

===== hdfs-site.xml (HDFS component) =====
* Changing dfs.replication does not change the replication factor of files that already exist. To change it for existing files: hadoop fs -setrep -R 3 <path>
<configuration>
  <property>
    <!-- NameNode HTTP address -->
    <name>dfs.namenode.http-address</name>
    <value>'''g2-hdfs-01:9870'''</value>
  </property>
  <property>
    <!-- Secondary NameNode HTTP address -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>'''g2-hdfs-02:9870'''</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/u01/hdfs/dfs/nn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/u01/hdfs/dfs/dn</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- With false, files can be created on DFS without permission checks -->
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <!-- Hosts listed in this file are decommissioned -->
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/etc/hadoop/workers.exclude</value>
  </property>
</configuration>

-- del (properties removed; the defaults suffice)
  <property>
    <!-- Replication factor, default 3 -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>

===== mapred-site.xml =====
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

-- del (properties removed)
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>g2-hdfs-01:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>g2-hdfs-01:19888</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
  </property>
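A quick way to confirm that the files above are actually picked up is hdfs getconf. A minimal sketch, not part of the original procedure; run as hdfs with HADOOP_CONF_DIR set as in the profile section, and the expected values taken from the configs above:

<nowiki>#</nowiki> Sanity-check the configuration Hadoop loads (sketch)
hdfs getconf -confKey fs.defaultFS # expect hdfs://g2-hdfs-01:9000
hdfs getconf -confKey dfs.namenode.name.dir # expect file:/u01/hdfs/dfs/nn
hdfs getconf -namenodes # expect g2-hdfs-01
hdfs getconf -secondaryNameNodes # expect g2-hdfs-02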
===== yarn-site.xml =====
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>'''g2-hdfs-01'''</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>'''g2-hdfs-01:8088'''</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>32768</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>

Variant for exposing the web UI externally:
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop01/192.168.44.5:8088</value>
    <description>For external access, replace with the real external IP; otherwise it defaults to localhost:8088</description>
  </property>

* yarn.resourcemanager.hostname : address of the YARN ResourceManager; if unset, Active Nodes stays at 0
* yarn.scheduler.maximum-allocation-mb : maximum memory allocation per container request, in MB; default 8192
* yarn.nodemanager.aux-services : how reducers fetch data (the shuffle service)
* yarn.nodemanager.vmem-check-enabled : false disables the virtual-memory check

===== workers =====
g2-hdfs-01
g2-hdfs-02
..

==== Node ====
===== Status =====
hdfs dfsadmin -report
yarn node -list

===== Add =====
After starting the dfs & yarn services on the new node, on the NameNode:
* the new node shows up in the reports
* the new node's hostname must be added to workers, otherwise the next start-dfs.sh will skip it

===== Exclude =====
If dfs.hosts.exclude is configured in hdfs-site.xml, hosts can be decommissioned dynamically:
hdfs dfsadmin -refreshNodes
Note: changes to hdfs-site.xml normally require a dfs restart to take effect. But once dfs.hosts.exclude is configured, editing the exclude file followed by a node refresh takes effect immediately.

=== INIT ===
<nowiki>#</nowiki> Path
mkdir -p /u01/hdfs/dfs/nn
mkdir -p /u01/hdfs/dfs/dn
mkdir -p /u01/hdfs/tmp
<nowiki>#</nowiki> chown
chown -R hdfs:hadoop /opt/hadoop*
chown -R hdfs:hadoop /u01/hdfs
<nowiki>#</nowiki> format nn
/opt/hadoop/bin/hdfs namenode -format

=== Start ===
<nowiki>#</nowiki> Start/stop as the hdfs user (services started as hdfs cannot be stopped by root)
/opt/hadoop/sbin/start-dfs.sh
/opt/hadoop/sbin/start-yarn.sh
/opt/hadoop/sbin/stop-dfs.sh
/opt/hadoop/sbin/stop-yarn.sh
http://mc0:9870 # hdfs
http://mc0:8088 # yarn
<nowiki>#</nowiki> Start/stop a single daemon
<nowiki>#</nowiki> /opt/hadoop/bin
hdfs --daemon start datanode
hdfs --daemon start namenode
yarn --daemon start resourcemanager
yarn --daemon start nodemanager

=== SYNC ===
<nowiki>#</nowiki> HN=$1
HN=g2-hdfs-02
<nowiki>#</nowiki> hosts
ssh ${HN} "hostnamectl set-hostname ${HN}"
scp /etc/hosts ${HN}:/etc/hosts
<nowiki>#</nowiki> Java
scp /u01/soft/jdk_180361.tar.gz ${HN}:/tmp/
ssh ${HN} "cd /tmp; tar -xzvf jdk_180361.tar.gz; mkdir /usr/java/; mv jdk1.8.0_361 /usr/java/"
ssh ${HN} "ln -s /usr/java/jdk1.8.0_361/jre/bin/java /usr/bin/java"
<nowiki>#</nowiki> Hadoop
ssh ${HN} "groupadd hadoop -g 1001; useradd hdfs -m -s /bin/bash -g hadoop -u 1001"
scp /home/hdfs/.ssh/authorized_keys ${HN}:/tmp/
ssh ${HN} "mkdir /home/hdfs/.ssh; mv /tmp/authorized_keys /home/hdfs/.ssh/; chown -R hdfs:hadoop /home/hdfs/.ssh"
scp /u01/soft/hadoop_334.tar.gz ${HN}:/tmp/
ssh ${HN} "cd /tmp; tar -xzvf hadoop_334.tar.gz; mv hadoop-3.3.4 /opt/; ln -s /opt/hadoop-3.3.4 /opt/hadoop; chown -R hdfs:hadoop /opt/hadoop*"
ssh ${HN} "mkdir -p /u01/hdfs/dfs/nn /u01/hdfs/dfs/dn /u01/hdfs/tmp; chown -R hdfs:hadoop /u01/hdfs"
ssh hdfs@${HN} "/opt/hadoop/bin/hdfs namenode -format"

=== Error ===
* ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode. java.lang.IllegalArgumentException: Does not contain a valid host:port authority: esxi_mc0:9000
: The hostname is invalid; characters such as '_' are not allowed in the host part of fs.defaultFS (here, the underscore in esxi_mc0).
* WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unable to get NameNode addresses
: Check the fs.defaultFS setting in core-site.xml.
* Hadoop stuck in safe mode; leave it with:
hdfs dfsadmin -safemode leave
[[分类:Develop]] [[分类:Hadoop]]