Deploying Hue on CentOS 7
This article walks through deploying Hue on CentOS 7 and should be a useful reference for anyone setting it up.
Preparation
1. Install Python
2. Install Maven
3. Application services are generally run under a dedicated account, so create a hadoop group and a hue user in it:
groupadd hadoop
useradd -g hadoop hue
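To confirm the account was created with the right group (a quick sanity check):
# show the hue user's uid, gid, and group membership
id hue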
Install the third-party packages Hue depends on:
yum -y install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi gcc gcc-c++ krb5-devel libtidy libxml2-devel libxslt-devel openldap-devel python-devel sqlite-devel openssl-devel mysql-devel gmp-devel
Download the Hue source code
Option 1: download directly from the official site
http://gethue.com
Option 2: clone with git
git clone -b branch-4.4 https://github.com/cloudera/hue.git branch-4.4
Here we use the git approach, then rename the checkout:
mv branch-4.4 hue
Set the owner of the hue directory and everything in it to user hue, group hadoop, then switch to the hue user:
chown -R hue:hadoop hue
su hue
Build
cd hue
make apps
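If the build succeeds, Hue's virtualenv and its management command are generated under build/env; a quick check (the path assumes the clone lives at /usr/local/hue):
# this entry point is used later for syncdb/migrate and to start the server
ls -l /usr/local/hue/build/env/bin/hue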
Configuration
Edit /usr/local/hue/desktop/conf/pseudo-distributed.ini.
Basic settings
[desktop]
# Secret key used to encrypt session data
secret_key=dfsahjfhflsajdhfljahl
# Time zone name
time_zone=Asia/Shanghai
# Enable or disable debug mode.
django_debug_mode=false
# Enable or disable backtrace for server error
http_500_debug_mode=false
# This should be the hadoop cluster admin
## default_hdfs_superuser=hdfs
default_hdfs_superuser=root
# Apps to disable
# app_blacklist=impala,security,rdbms,jobsub,pig,hbase,sqoop,zookeeper,metastore,indexer
Database settings
[[database]]
# Database engine type
engine=mysql
# Database host
host=10.62.124.43
# Database port
port=3306
# Database user
user=root
# Database password
password=xhw888
# Database name
name=hue
Integrating Hue with Hadoop 3.1.0
Configure the Hadoop cluster
Edit etc/hadoop/core-site.xml:
<!-- Hue proxy user. Start -->
<property>
<name>hadoop.proxyuser.hue.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hue.groups</name>
<value>*</value>
</property>
<!-- Hue proxy user. End -->
After making the change, restart HDFS for it to take effect.
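A minimal restart sketch, assuming a non-HA cluster managed with the stock sbin scripts and the Hadoop install path used elsewhere in this article:
# reload core-site.xml so the proxyuser settings take effect
/usr/local/hadoop-3.1.0/sbin/stop-dfs.sh
/usr/local/hadoop-3.1.0/sbin/start-dfs.sh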
Configure Hue
Edit desktop/conf/pseudo-distributed.ini, find the [[hdfs_clusters]] section, and change it as follows:
[hadoop]
# Configuration for HDFS NameNode
# ------------------------------------------------------------------------
[[hdfs_clusters]]
# HA support by using HttpFs
[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoopSvr1:8020
# NameNode logical name.
logical_name=hadoopSvr1
# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
## webhdfs_url=http://localhost:50070/webhdfs/v1
webhdfs_url=http://hadoopSvr1:9870/webhdfs/v1
# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
# Directory of the Hadoop configuration
## hadoop_conf_dir=$HADOOP_CONF_DIR when set or '/etc/hadoop/conf'
hadoop_conf_dir=$HADOOP_CONF_DIR
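Before moving on, it is worth checking that the WebHDFS endpoint configured above answers; this uses the standard WebHDFS REST API (listing / as the hue user is just an example):
# should return a JSON FileStatuses listing for /
curl "http://hadoopSvr1:9870/webhdfs/v1/?op=LISTSTATUS&user.name=hue"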
Find the [[yarn_clusters]] section and change it as follows:
# Configuration for YARN (MR2)
# ------------------------------------------------------------------------
[[yarn_clusters]]
[[[default]]]
# Enter the host on which you are running the ResourceManager
## resourcemanager_host=localhost
resourcemanager_host=hadoopSvr3
# The port where the ResourceManager IPC listens on
## resourcemanager_port=8032
# Whether to submit jobs to this cluster
submit_to=True
# Resource Manager logical name (required for HA)
## logical_name=
# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false
# URL of the ResourceManager API
## resourcemanager_api_url=http://localhost:8088
resourcemanager_api_url=http://hadoopSvr3:8088
# URL of the ProxyServer API
## proxy_api_url=http://localhost:8088
proxy_api_url=http://hadoopSvr3:8088
# URL of the HistoryServer API
## history_server_api_url=http://localhost:19888
history_server_api_url=http://hadoopSvr4:19888
# URL of the Spark History Server
## spark_history_server_url=http://localhost:18088
spark_history_server_url=http://hadoopSvr1:18080
# Change this if your Spark History Server is Kerberos-secured
## spark_history_server_security_enabled=false
# In secure mode (HTTPS), if SSL certificates from YARN Rest APIs
# have to be verified against certificate authority
## ssl_cert_ca_verify=True
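The ResourceManager URL can be verified the same way through YARN's REST API:
# returns cluster metadata as JSON if the ResourceManager is reachable
curl http://hadoopSvr3:8088/ws/v1/cluster/info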
Integrating Hue with Hive
Hue is used here mainly for interactive Hive queries. On the server running Hue, create the directory /usr/local/hue/hive/conf and copy the hive-site.xml from the HiveServer2 node into it.
Edit desktop/conf/pseudo-distributed.ini, find the [beeswax] section, and change it as follows:
[beeswax]
# HiveServer2 host (use the hostname; Kerberos requires it)
hive_server_host=hadoopSvr3
# HiveServer2 port
hive_server_port=10000
# Directory holding the HiveServer2 hive-site.xml
hive_conf_dir=/usr/local/hue/hive/conf
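Connectivity to HiveServer2 can be tested from the shell before trying it in Hue; a sketch, assuming beeline is on the PATH:
# open a JDBC session as the hue user and run a trivial query
beeline -u jdbc:hive2://hadoopSvr3:10000 -n hue -e "show databases;"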
Integrating Hue with Spark
Start the Spark Thrift Server
cd /usr/local/spark/sbin/
./start-thriftserver.sh --master yarn --deploy-mode client
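By default the Thrift Server listens on port 10000, matching the sql_server_port used in the [spark] section below; a quick check that it came up:
# confirm a listener on the Thrift port
ss -tnlp | grep 10000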
Install Livy
Download the Livy release package
Download page: http://livy.incubator.apache.org/download/
Unzip the package:
unzip apache-livy-0.6.0-incubating-bin.zip
mv apache-livy-0.6.0-incubating-bin livy-0.6.0
Configure Livy
cd livy-0.6.0/conf/
Configure livy-env.sh
cp livy-env.sh.template livy-env.sh
Create Livy's log and pid directories (commands follow after this block), then add the following to livy-env.sh:
export HADOOP_CONF_DIR=/usr/local/hadoop-3.1.0/etc/hadoop
export SPARK_HOME=/usr/local/spark
export LIVY_LOG_DIR=/data/livy/logs
export LIVY_PID_DIR=/data/livy/pid
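The log and pid directories referenced above must exist and be writable by the account that runs Livy; a sketch, assuming Livy runs as the hue user:
# create the directories declared in livy-env.sh
mkdir -p /data/livy/logs /data/livy/pid
chown -R hue:hadoop /data/livy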
Configure livy.conf
cp livy.conf.template livy.conf
Add the following to livy.conf:
# What port to start the server on.
livy.server.port = 8998
# What spark master Livy sessions should use.
livy.spark.master = yarn
# What spark deploy mode Livy sessions should use.
livy.spark.deploy-mode = client
Start Livy
/usr/local/livy-0.6.0/bin/livy-server start
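Livy exposes a REST API on the configured port, so listing sessions makes a simple liveness check:
# returns a JSON session list; empty right after startup
curl http://hadoopSvr3:8998/sessions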
Configure Hue
Edit desktop/conf/pseudo-distributed.ini, find the [spark] section, and change it as follows:
###########################################################################
# Settings to configure the Spark application.
###########################################################################
[spark]
# The Livy Server URL.
## livy_server_url=http://localhost:8998
livy_server_url=http://hadoopSvr3:8998
# Configure Livy to start in local 'process' mode, or 'yarn' workers.
## livy_server_session_kind=yarn
livy_server_session_kind=yarn
# Whether Livy requires client to perform Kerberos authentication.
## security_enabled=false
# Whether Livy requires client to use csrf protection.
## csrf_enabled=false
# Host of the Sql Server
## sql_server_host=localhost
sql_server_host=hadoopSvr1
# Port of the Sql Server
## sql_server_port=10000
sql_server_port=10000
# Choose whether Hue should validate certificates received from the server.
## ssl_cert_ca_verify=true
###########################################################################
MySQL initialization
Create a database named hue on the MySQL server:
# log in to MySQL
mysql -u root -p
# create the hue database
create database hue;
# create the hue user
create user 'hue'@'%' identified by 'xhw888';
# grant privileges
grant all privileges on hue.* to 'hue'@'%';
flush privileges;
Then run the following from the hue directory. Note that the [[database]] section above connects as root; if Hue should use the hue account created here instead, update user and password there to match.
build/env/bin/hue syncdb
build/env/bin/hue migrate
When these finish, you can check in MySQL that Hue's tables have been created.
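For example, using the hue account created above:
# list the tables Hue generated in the hue database
mysql -u hue -p hue -e 'show tables;'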
Start Hue
build/env/bin/supervisor &
Stop Hue
Normally, pressing Ctrl+C is enough to stop the Hue service.
If Hue is running in the background, use kill:
ps -ef | grep hue | grep -v grep | awk '{print $2}' | xargs kill -9
Verification
Once the service is up, it listens on port 8000 by default:
http://10.62.124.44:8000
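A quick check from the shell that the web UI is answering (the first request typically redirects to the login page):
# expect an HTTP response, usually a redirect to the login page
curl -I http://10.62.124.44:8000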
Verify the Hue/Spark integration
Log in to the Hue web UI, open a Scala editor, and run the following Scala code:
var counter = 0
val data = Array(1, 2, 3, 4, 5)
var rdd = sc.parallelize(data)
// add 1 to each element and collect the result to the driver
rdd.map(x=>x+1).collect()
If you get output like the following, the integration works:
counter: Int = 0
data: Array[Int] = Array(1, 2, 3, 4, 5)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:27
res2: Array[Int] = Array(2, 3, 4, 5, 6)
Hue admin account
Username: hue
Password: xhw888
Problems encountered
An error during MySQL initialization:
django.db.utils.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib64/mysql/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory")
Solution
Log in to MySQL and switch the account's authentication plugin to mysql_native_password (do this for whichever account Hue connects as):
mysql> alter user 'hue'@'%' identified with mysql_native_password by 'xhw888';
————————————————
Copyright notice: This is an original article by CSDN blogger 逝水-无痕, published under the CC 4.0 BY-SA license; reposts must include the original link and this notice.
Original link: https://blog.csdn.net/wangkai_123456/article/details/90180471