1. Hadoop component service management commands

  • Start the NameNode
bin/hadoop-daemon.sh start namenode

  • Start the DataNodes (on all worker nodes)
bin/hadoop-daemons.sh start datanode

  • Start the SecondaryNameNode
bin/hadoop-daemon.sh start secondarynamenode

  • Start the ResourceManager
bin/yarn-daemon.sh start resourcemanager

  • Start the NodeManagers (on all worker nodes)
bin/yarn-daemons.sh start nodemanager

  • Stop the DataNodes (on all worker nodes)
bin/hadoop-daemons.sh stop datanode
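The singular/plural naming above is significant: hadoop-daemon.sh and yarn-daemon.sh act on the local node only, while hadoop-daemons.sh and yarn-daemons.sh fan out over SSH to every host listed in the workers (formerly slaves) file. Note that in stock Hadoop distributions these scripts live under sbin/ rather than bin/, and Hadoop 3 also offers the equivalent `hdfs --daemon start <name>` / `yarn --daemon start <name>` forms. As a memory aid, the mapping can be sketched as a small shell helper (the function name is my own, not part of Hadoop):

```shell
# Hypothetical helper: map a daemon name to the script that manages it.
# Singular scripts act on the local node; plural ones fan out to all
# worker nodes listed in etc/hadoop/workers.
daemon_script() {
  case "$1" in
    namenode|secondarynamenode) echo "hadoop-daemon.sh"  ;;  # local node
    datanode)                   echo "hadoop-daemons.sh" ;;  # all workers
    resourcemanager)            echo "yarn-daemon.sh"    ;;  # local node
    nodemanager)                echo "yarn-daemons.sh"   ;;  # all workers
    *) echo "unknown daemon: $1" >&2; return 1 ;;
  esac
}

daemon_script datanode   # prints hadoop-daemons.sh
```

Each script takes `start` or `stop` as its first argument, as in the examples above.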


2. Everyday Hadoop operation commands

  1) Everyday HDFS commands

  Reference: https://www.hadoopdoc.com/hdfs/hdfs-shell

  Viewing command help

  The hdfs dfs command is equivalent to hadoop fs; running it without arguments prints the usage:

# hdfs dfs 
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Usage: hadoop fs [generic options]
    [-appendToFile <localsrc> ... <dst>]
    [-cat [-ignoreCrc] <src> ...]
    [-checksum [-v] <src> ...]
    [-chgrp [-R] GROUP PATH...]
    [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
    [-chown [-R] [OWNER][:[GROUP]] PATH...]
    [-concat <target path> <src path> <src path> ...]
    [-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
    [-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] [-s] <path> ...]
    [-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
    [-createSnapshot <snapshotDir> [<snapshotName>]]
    [-deleteSnapshot <snapshotDir> <snapshotName>]
    [-df [-h] [<path> ...]]
    [-du [-s] [-h] [-v] [-x] <path> ...]
    [-expunge [-immediate] [-fs <path>]]
    [-find <path> ... <expression> ...]
    [-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-getfacl [-R] <path>]
    [-getfattr [-R] {-n name | -d} [-e en] <path>]
    [-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
    [-head <file>]
    [-help [cmd ...]]
    [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
    [-mkdir [-p] <path> ...]
    [-moveFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
    [-moveToLocal <src> <localdst>]
    [-mv <src> ... <dst>]
    [-put [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
    [-renameSnapshot <snapshotDir> <oldName> <newName>]
    [-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
    [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
    [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
    [-setfattr {-n name [-v value] | -x name} <path>]
    [-setrep [-R] [-w] <rep> <path> ...]
    [-stat [format] <path> ...]
    [-tail [-f] [-s <sleep interval>] <file>]
    [-test -[defswrz] <path>]
    [-text [-ignoreCrc] <src> ...]
    [-touch [-a] [-m] [-t TIMESTAMP (yyyyMMdd:HHmmss) ] [-c] <path> ...]
    [-touchz <path> ...]
    [-truncate [-w] <length> <path> ...]
    [-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]

  Most of these commands behave like their Linux counterparts.
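For example, -ls, -cp, -mv, -rm, and -chmod all mirror their Linux namesakes, including octal modes for -chmod. As a small self-contained illustration (the helper below is my own, not a Hadoop tool), this is how an octal mode maps to the permission string shown in -ls output:

```shell
# Illustration only (not part of Hadoop): expand a chmod-style octal
# mode such as 750 into the rwx string that `hdfs dfs -ls` displays.
octal_to_rwx() {
  mode=$1
  out=""
  while [ -n "$mode" ]; do
    d=$(printf '%s' "$mode" | cut -c1)   # take the leading octal digit
    mode=${mode#?}                       # drop it from the front
    [ $((d & 4)) -ne 0 ] && out="${out}r" || out="${out}-"
    [ $((d & 2)) -ne 0 ] && out="${out}w" || out="${out}-"
    [ $((d & 1)) -ne 0 ] && out="${out}x" || out="${out}-"
  done
  printf '%s\n' "$out"
}

octal_to_rwx 750   # prints rwxr-x---
```

So `hdfs dfs -chmod 750 /some/dir` (a hypothetical path) would show as drwxr-x--- in -ls output, exactly as with Linux chmod.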

   1. Granting another account access to HDFS files

  Why permission grants are needed

  When installing the DolphinScheduler scheduling platform, it needs a directory (storage space) of its own on HDFS; integrating Hive into the scheduling platform likewise requires a separate directory.

  Directories and files on HDFS each belong to their own operating account, and it is often necessary to grant access to a different account. The setfacl command covers this need.

  Check a directory's current permissions with getfacl
# List the subdirectories currently under / in HDFS
hdfs dfs -ls /

# Create the directory you need
hdfs dfs -mkdir -p /dolphinscheduler

# Check the permissions of the new (or any existing) directory
hdfs dfs -getfacl /dolphinscheduler
# file: /dolphinscheduler
# owner: root
# group: supergroup
user::rwx
group::r-x
mask::rwx
other::r-x
At this point only the owner, root, has write access to the path.
  Grant access to another user with setfacl
hdfs dfs -setfacl -R -m user:dolphinscheduler:rwx /dolphinscheduler
  Running hdfs dfs -getfacl /dolphinscheduler again shows the new entry:
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
# file: /dolphinscheduler
# owner: root
# group: supergroup
user::rwx
user:dolphinscheduler:rwx
group::r-x
mask::rwx
other::r-x
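Note that setfacl itself prints nothing on success (apart from the deprecation warning); the listing above comes from running getfacl again. Since the mask entry is rwx, the named-user entry is fully effective. In a script, the grant can be verified by grepping that output. The sketch below uses the captured text for illustration; against a live cluster you would instead capture `hdfs dfs -getfacl /dolphinscheduler`:

```shell
# Verify that the named-user ACL entry exists. The sample text is the
# getfacl output captured above; on a real cluster replace it with:
#   acl=$(hdfs dfs -getfacl /dolphinscheduler)
acl='# file: /dolphinscheduler
# owner: root
# group: supergroup
user::rwx
user:dolphinscheduler:rwx
group::r-x
mask::rwx
other::r-x'

if printf '%s\n' "$acl" | grep -q '^user:dolphinscheduler:rwx$'; then
  echo "dolphinscheduler has rwx on /dolphinscheduler"
else
  echo "ACL entry missing" >&2
fi
```

To have files created later under the directory inherit the grant, a default ACL can also be set: `hdfs dfs -setfacl -m default:user:dolphinscheduler:rwx /dolphinscheduler`.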


Copyright notice: this is an original article by 思维无界限, licensed under CC 4.0 BY-SA. When reposting, please include a link to the original source and this notice.
Original link: https://www.cnblogs.com/happy-king/p/16409466.html