1.1 Linux Server Preparation
1.1.1 Linux OS Requirements
Operating System | Version |
---|---|
Red Hat Enterprise Linux | 7.0 and above |
CentOS | 7.0 and above |
Oracle Enterprise Linux | 7.0 and above |
Ubuntu LTS | 16.04 and above |
1.1.2 Recommended Server Configuration
Linkis runs on 64-bit general-purpose hardware servers with the Intel x86-64 architecture. The following server hardware configuration is recommended for production environments:
1.1.3 Recommended Production Configuration
CPU | Memory | Disk Type | Network | Instances |
---|---|---|---|---|
16 cores+ | 32GB+ | SAS | Gigabit NIC | 1+ |
1.1.4 Base Software Installation
Taking CentOS as an example, install the base software as follows:
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e \
's|^#baseurl=http://mirror.centos.org|baseurl=https://mirrors.tuna.tsinghua.edu.cn|g' \
-i.bak \
/etc/yum.repos.d/CentOS-*.repo
yum install -y \
less vim zip unzip gzip tar wget expect iproute openssh-server openssh-clients which binutils \
freetype fontconfig fontconfig-devel chinese-support curl sudo pam_krb5 krb5-workstation krb5-libs krb5-auth-dialog sssd crontabs \
telnet dos2unix sed net-tools python-pip glibc-common mysql libaio numactl initscripts psmisc
yum clean all
1.1.5 nginx Installation
cat > /etc/yum.repos.d/nginx.repo <<'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
EOF
yum -y install nginx
rm -rf /var/cache/yum
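The package installation does not start nginx. On CentOS 7 it can be enabled and started via systemd; these are standard systemctl commands, not part of the original steps:
sudo systemctl enable nginx
sudo systemctl start nginx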
1.1.6 Hostname Setup
hostnamectl status
sudo hostnamectl set-hostname node4
hostname
1.1.7 /etc/hosts Setup
Note: the first line must map the actual intranet IP address to the hostname set above.
vim /etc/hosts
192.168.0.81 node4
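To confirm the mapping resolves (node4 and 192.168.0.81 are the example values from above):
getent hosts node4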
1.2 User Preparation
It is recommended to deploy as the hadoop user, which needs passwordless sudo and passwordless SSH. For a distributed deployment, passwordless SSH is required between every pair of nodes.
1.2.1 Add the Deployment User
sudo groupadd hadoop
sudo useradd -g hadoop hadoop
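If the account also needs a login password (the deploy config in section 2 sets deployPwd=hadoop, so a matching password is assumed here; passwd --stdin is RHEL/CentOS-specific):
echo 'hadoop' | sudo passwd --stdin hadoop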
1.2.2 Passwordless sudo
sudo vi /etc/sudoers
hadoop ALL=(ALL) NOPASSWD: ALL
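A quick check that passwordless sudo works for the new user (sudo -n fails instead of prompting, so this should print root):
su - hadoop -c 'sudo -n whoami'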
1.2.3 Passwordless SSH
su - hadoop
cd ~
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
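For a distributed deployment, also push the public key to every other node and verify that a non-interactive login works (node4 is the example hostname from above; repeat for each node):
ssh-copy-id hadoop@node4
ssh -o BatchMode=yes hadoop@node4 hostname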
1.2.4 Environment Variables
Adjust the following environment variables to match your actual installation paths; the values below are for reference only:
vim ~/.bash_profile
# User specific environment and startup programs
export CDH_HOME=/opt/cloudera/parcels/CDH
export JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
export SPARK_HOME=/data/appcom/install/spark-2.4.7-bin-cdp7.1.7
export SPARK_CONF_DIR=${SPARK_HOME}/conf
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
export HIVE_CONF_DIR=/etc/hive/conf
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/data/appcom/install/flink-1.12.2
export FLINK_CONF_DIR=${FLINK_HOME}/conf
export FLINK_LIB_DIR=${FLINK_HOME}/lib
export MAVEN_HOME=/data/appcom/install/apache-maven-3.8.4
export SQOOP_HOME=/data/appcom/install/sqoop-1.4.6.bin__hadoop-2.0.4-alpha
export SQOOP_CONF_DIR=${SQOOP_HOME}/conf
export SEATUNNEL_HOME=/data/appcom/install/seatunnel
export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$CDH_HOME/bin:$SQOOP_HOME/bin:$JAVA_HOME/bin:$PATH:$MAVEN_HOME/bin:$SEATUNNEL_HOME/bin
export PYSPARK_ALLOW_INSECURE_GATEWAY=1
export ENABLE_METADATA_QUERY=true
export LINKIS_HOME=/data/appcom/install/linkis
export SKYWALKING_AGENT_PATH=/data/appcom/install/skywalking-agent/skywalking-agent.jar
export HADOOP_CLASSPATH=`hadoop classpath`
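After saving, reload the profile and spot-check a variable or two before moving on (java -version assumes the JDK path above actually exists on your machine):
source ~/.bash_profile
echo $JAVA_HOME
java -version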
1.3 Database Preparation
mysql -u root -p
CREATE USER 'linkis'@'%' IDENTIFIED BY 'linkis';
DROP DATABASE IF EXISTS linkis;
CREATE DATABASE linkis DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON linkis.* TO 'linkis'@'%';
SHOW GRANTS FOR 'linkis'@'%';
FLUSH PRIVILEGES;
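To confirm the new account works, try connecting from the deployment machine (host, port, user, and password are the example values created above):
mysql -h 192.168.0.81 -P 3306 -u linkis -plinkis -e 'SHOW DATABASES;'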
1.4 Environment Verification
source ~/.bash_profile
hdfs dfs -ls /
beeline -u "jdbc:hive2://node4:10000/default"
spark-submit --version
$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client \
$SPARK_HOME/examples/jars/spark-examples*.jar 10
$FLINK_HOME/bin/flink run -m yarn-cluster $FLINK_HOME/examples/streaming/WordCount.jar
2 Configuration Changes
2.1 Installation Package Preparation
Download the packages from the official website, or compile them yourself by following the build article.
Backend package: apache-linkis-1.5.0-bin.tar.gz
Frontend package: apache-linkis-1.5.0-web-bin.tar.gz
mkdir -p /data/appcom/install/linkis_tmp
cp linkis-dist/target/apache-linkis-1.5.0-bin.tar.gz /data/appcom/install/linkis_tmp
cp linkis-web/apache-linkis-1.5.0-web-bin.tar.gz /data/appcom/install/linkis_tmp
cd /data/appcom/install/linkis_tmp
tar -xvf apache-linkis-1.5.0-bin.tar.gz
tar -xvf apache-linkis-1.5.0-web-bin.tar.gz
2.2 Modify the Configuration
vim deploy-config/linkis-env.sh
deployUser=hadoop
deployPwd=hadoop
dbType=mysql
WORKSPACE_USER_ROOT_PATH=file:///data/tmp/linkis/
HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
ENGINECONN_ROOT_PATH=/data/appcom/tmp
RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis
YARN_RESTFUL_URL="http://192.168.0.81:8088"
HIVE_HOME=/usr/local/hive
HIVE_CONF_DIR=/usr/local/hive/conf
SPARK_HOME=/data/appcom/install/spark-2.4.7-bin-2.7.2
SPARK_CONF_DIR=${SPARK_HOME}/conf
SPARK_VERSION=2.4.7
HIVE_VERSION=3.1.3
PYTHON_VERSION=python2
EUREKA_INSTALL_IP=192.168.0.81
EUREKA_HEAP_SIZE="1024M"
GATEWAY_INSTALL_IP=192.168.0.81
GATEWAY_HEAP_SIZE="1024M"
MANAGER_INSTALL_IP=192.168.0.81
MANAGER_HEAP_SIZE="1024M"
ENGINECONNMANAGER_INSTALL_IP=192.168.0.81
ENGINECONNMANAGER_HEAP_SIZE="1024M"
ENTRANCE_INSTALL_IP=192.168.0.81
ENTRANCE_HEAP_SIZE="1024M"
PUBLICSERVICE_INSTALL_IP=192.168.0.81
PUBLICSERVICE_PORT=9105
PUBLICSERVICE_HEAP_SIZE="1024M"
export SERVER_HEAP_SIZE="1024M"
LINKIS_HOME=/data/appcom/install/linkis
LINKIS_EXTENDED_LIB=/data/appcom/common/linkisExtendedLib
LINKIS_VERSION=1.5.0
LINKIS_PUBLIC_MODULE=lib/linkis-commons/public-module
vim deploy-config/db.sh
MYSQL_HOST=192.168.0.81
MYSQL_PORT=3306
MYSQL_DB=linkis
MYSQL_USER=linkis
MYSQL_PASSWORD=linkis
HIVE_META_URL="jdbc:mysql://192.168.0.81:3306/hive?useSSL=false&useUnicode=true"
HIVE_META_USER="root"
HIVE_META_PASSWORD="Root.123456"
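Before running the installer, it is worth confirming that the Hive metastore database referenced by HIVE_META_URL is reachable with the configured credentials (values as set above):
mysql -h 192.168.0.81 -P 3306 -u root -p'Root.123456' -e 'USE hive; SHOW TABLES;'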
3 Installation
3.1 Backend Installation
Because the environment check script has a bug, comment out the Hive check for now:
vim linkis_tmp/bin/checkEnv.sh
Comment out line 74:
#beeline -u${HIVE_META_URL} -n${HIVE_META_USER} -p${MYSQL_PASSWORD} > /dev/null 2>&1
mkdir -p /data/tmp/linkis/
mkdir -p /data/appcom/tmp
mkdir -p /data/appcom/common/linkisExtendedLib
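If the directories above were created as root, hand them over to the deployment user from section 1.2 before installing (this assumes the hadoop user owns the whole install tree):
sudo chown -R hadoop:hadoop /data/tmp/linkis /data/appcom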
sh bin/install.sh
At the prompts, choose 1, 1, then 2.
After the installation script finishes, modify the following configuration files:
- conf/linkis-cli/linkis-cli.properties
wds.linkis.client.common.gatewayUrl=http://192.168.0.81:9001
- sbin/common.sh
Change line 22 to the actual intranet IP: ipaddr=192.168.0.81
Change the return value on line 44 from 1 to 0, which means commands always execute locally: return 0
- Add the MySQL driver
cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/
cp mysql-connector-java-8.0.28.jar ${LINKIS_HOME}/lib/linkis-commons/public-module/
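A quick check that the driver landed in both locations:
ls ${LINKIS_HOME}/lib/linkis-spring-cloud-services/linkis-mg-gateway/mysql-connector-java-*.jar
ls ${LINKIS_HOME}/lib/linkis-commons/public-module/mysql-connector-java-*.jar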
3.2 Frontend Installation
mkdir -p /data/appcom/install/web/dss/
mv /data/appcom/install/linkis_tmp/dist /data/appcom/install/web/dss/linkis
The nginx configuration is as follows:
vim /etc/nginx/nginx.conf
Add the following two lines to the http block; client_max_body_size controls the upload size limit for materials:
include /etc/nginx/conf.d/dss.conf;
client_max_body_size 500m;
vim /etc/nginx/conf.d/dss.conf
server {
listen 8089;
server_name localhost;
location /dss/visualis {
root /data/appcom/install/web;
autoindex on;
}
location /dss/linkis {
root /data/appcom/install/web;
autoindex on;
}
location / {
root /data/appcom/install/web/dist;
index index.html index.htm;
}
location /ws {
proxy_pass http://192.168.0.81:9001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
}
location /api {
proxy_pass http://192.168.0.81:9001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header x_real_ipP $remote_addr;
proxy_set_header remote_addr $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_connect_timeout 4s;
proxy_read_timeout 600s;
proxy_send_timeout 12s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection upgrade;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
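After saving both files, validate the configuration and reload nginx so the changes take effect:
sudo nginx -t
sudo systemctl reload nginx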
3.3 Verification
sh $LINKIS_HOME/sbin/linkis-start-all.sh
sh $LINKIS_HOME/bin/linkis-cli -engineType shell-1 -code "whoami" -codeType shell
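If the shell job prints its result, the backend works end to end. The microservice registrations and the console can also be checked in a browser; 20303 is the default Linkis Eureka port and 8089 is the nginx listen port configured above, so adjust both if you changed them:
curl -s http://192.168.0.81:20303 | head
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.81:8089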