Setting Up a Kubernetes Environment with Rancher

1. Install Docker

sudo apt-get update
sudo apt-get install docker.io

2. Run Rancher

mkdir /home/ubuntu/rancher
sudo docker run -d -v /home/ubuntu/rancher:/var/lib/rancher/ --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable

3. Log in

http://OUTER_IP
The default username is admin; you will be asked to set a password.
The server URL is usually the internal address, e.g. https://172.31.33.84

4. Create a new cluster with the wizard; it generates the corresponding commands for you
4.1. Following the instructions, run the controlplane and worker roles on node1

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --controlplane --worker

4.2. Following the instructions, run the etcd and worker roles on node2

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.2 --server https://172.31.33.84 --token 4rjrgss2hq5w6nlmp4frptxqlq68zr7szvd9fd45pm7rfk968snsjk --ca-checksum 79f195454ab982ce478878f4e5525516ad09d6eadc4c611d4d542da9a7fc6c7e --etcd --worker

5. In the Rancher UI you will see the two nodes join successfully

6. Configure the firewall so Rancher traffic can get through
I opened ports 2379 and 10250 here (the other K8S ports had already been configured); a sketch follows below.
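
If the nodes use ufw, a minimal sketch for opening these ports could look like this (adjust for your own firewall or cloud security group):

#Hypothetical example: open the etcd and kubelet ports with ufw
sudo ufw allow 2379/tcp
sudo ufw allow 10250/tcp
sudo ufw status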

7. Pick an app to deploy

Much more convenient than deploying K8S by hand!

Setting Up a Kubernetes Environment 05

1. Disable swap

sudo swapoff -a
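
swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab can be commented out as well (a minimal sketch):

#Comment out any swap line in /etc/fstab so swap stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab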

2. Enable docker.service at boot

sudo systemctl enable docker.service

3. Switch the cgroup driver to systemd

#Reference: https://kubernetes.io/docs/setup/cri/

sudo vi /etc/docker/daemon.json
#Contents:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
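
For the new daemon.json to take effect, restart Docker (the same step the referenced page recommends):

sudo systemctl daemon-reload
sudo systemctl restart docker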

4. Some useful commands

kubeadm init
kubeadm reset

kubectl api-versions

kubectl config view

kubectl cluster-info
kubectl cluster-info dump

kubectl get nodes
kubectl get nodes -o wide
kubectl describe node mynode

kubectl get rc,namespace

kubectl get pods
kubectl get pods --all-namespaces -o wide
kubectl describe pod mypod

kubectl get deployments
kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard
kubectl describe deployment kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl expose deployment hikub01 --type=LoadBalancer

kubectl get services
kubectl get service -n kube-system
kubectl describe services kubernetes-dashboard --namespace=kubernetes-dashboard

kubectl proxy
kubectl proxy --address='172.172.172.101' --accept-hosts='.*' --accept-paths='.*'

kubectl run hikub01 --image=myserver:1.0.0 --port=8080
kubectl create -f  myserver-deployment.yaml
kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

kubectl delete deployment mydeployment
kubectl delete node mynode
kubectl delete pod mypod

kubectl get events --namespace=kube-system

kubectl taint node mynode node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl edit service myservice
kubectl edit service kubernetes-dashboard -n kube-system

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep neohope | awk '{print $1}')

Setting Up a Kubernetes Environment 04

This section deploys an application from a YAML file.

1. Write the file
vi hikub01-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hikub01-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myserver
        image: myserver:1.0.0
        ports:
        - containerPort: 8080

2. Create the deployment

kubectl create -f hikub01-deployment.yaml

3. Expose the port

kubectl expose deployment hikub01-deployment --type=LoadBalancer

4. Test

#List the pods
kubectl get pods -o wide

#List the deployments
kubectl get deployments -o wide

#List the services
kubectl get services -o wide

#Using the output, access the service from a browser, wget, or curl
curl http://ip:port

5. Clean up

kubectl delete -f hikub01-deployment.yaml

Setting Up a Kubernetes Environment 03

In this section we try deploying some services.

1. First, prepare our own Docker image
1.1. Prepare the files
vi Dockerfile

FROM node:6.12.0
EXPOSE 8080
COPY myserver.js .
CMD node myserver.js

vi myserver.js

var http = require('http');

var handleRequest = function(request, response) {
  console.log('Received request for URL: ' + request.url);
  response.writeHead(200);
  response.end('Hello World!');
};

var www = http.createServer(handleRequest);

www.listen(8080);

1.2. Test myserver.js

nodejs myserver.js

1.3. Build the image

#Build the image
sudo docker build -t myserver:1.0.0 .

1.4. Test the container

sudo docker run -itd --name=myserver -p8080:8080 myserver:1.0.0
curl localhost:8080

2. Export the image

docker images
sudo docker save 0fb19de44f41 -o myserver.tar

3. Import it on the other two nodes

scp myserver.tar ubuntu@node01:/home/ubuntu
ssh node01
sudo docker load -i myserver.tar
sudo docker tag 0fb19de44f41 myserver:1.0.0

4. Deploy the service with kubectl

#Create a deployment
kubectl run hikub01 --image=myserver:1.0.0 --port=8080

#Expose the service
kubectl expose deployment hikub01 --type=LoadBalancer

#List the pods
kubectl get pods -o wide

#List the deployments
kubectl get deployments -o wide

#List the services
kubectl get services -o wide

#Using the output, access the service from a browser, wget, or curl
curl http://ip:port

5. Clean up

#Delete the service
kubectl delete service hikub01

#Delete the deployment
kubectl delete deployment hikub01

#Delete the pod
kubectl delete pod hikub01

Setting Up a Kubernetes Environment 02

In the previous section we set up the cluster; in this section we deploy some k8s add-ons. The official add-on list is here:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

This time we deploy two add-ons: Calico and the dashboard.

1. Since resources are limited, allow the master to run workloads as well

kubectl taint nodes --all node-role.kubernetes.io/master-
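
To confirm the taint was removed (a quick check, not part of the original steps):

kubectl describe node master | grep Taints
#Should report: Taints: <none>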

2. Deploy Calico
2.1. Deploy

kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml

2.2. Watch the rollout and wait for it to complete

watch kubectl get pods --all-namespaces

3. Deploy the dashboard
3.1. Deploy

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

3.2. Watch the rollout and wait for it to complete

watch kubectl get pods --all-namespaces

3.3. Start the proxy

kubectl proxy

3.4. The login page can now be seen in a browser

http://IP:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

There is a gotcha here: the dashboard requires HTTPS for login, while the proxy serves plain HTTP, so login only succeeds when the address is localhost. I wasted quite a bit of time on this.
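
One way around this (my own workaround, not part of the original steps) is an SSH tunnel, so the browser really is talking to localhost. The master IP and ubuntu user below are assumptions based on the setup from section 01:

#Hypothetical example: forward local port 8001 to the master running kubectl proxy,
#then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
ssh -L 8001:localhost:8001 ubuntu@172.16.172.101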

3.5. Create a user

vi neohope-account.yaml

#File contents
apiVersion: v1
kind: ServiceAccount
metadata:
  name: neohope
  namespace: kube-system

kubectl create -f neohope-account.yaml

3.6. Bind the user to a role

vi neohope-role.yaml

#File contents
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: neohope
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: neohope
  namespace: kube-system

kubectl create -f  neohope-role.yaml

3.7. Get the token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep neohope | awk '{print $1}')
Name:         neohope-token-2khbb
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: neohope
kubernetes.io/service-account.uid: fc842f0e-0ef4-4c41-9f30-8a5409c866c2

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImtIRjFiZnI5V3NiRlpZQXpzUk9DanA4cHBCQnFOcFNrek5xTjltUGRLeDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuZW9ob3BlLXRva2VuLTJraGJiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im5lb2hvcGUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJmYzg0MmYwZS0wZWY0LTRjNDEtOWYzMC04YTU0MDljODY2YzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06bmVvaG9wZSJ9.Zsk4C3hs58JmLa0rRTdfoQKlY524kMtnlzEHxgMryv7u9kPHS51BA0xiVC1nMLDcbMp1U3YHlnz0-IJkFzVeaboq0qEFea56nnqASMSEtCB1c7IE52zip-4tDWdZ-jYwf7KN5Gwq_4ZUqa4gRf1znVH7nlsxTpaoQ_-yjJsQpqDyUb1BLgGrUGcWOF2hGMHrNPHbZfLyhsPp_ijOvmRviAq57nyrGYiVG9ZiMoGV_1Y5rvn2-L0BHCdgZjSzK6nlfvoMlpnqhQXkrxE0d9EJbeukfx5sOF3xUPkQx-6dKm3QrkqBNXInbDxJXJbj27JalGarpRDA9tsPg1mUqAb-7g
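
If you only want the token itself, a one-line sketch that extracts and decodes it directly:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep neohope | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode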

3.8. If you are logging in from localhost, the token above is all you need

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

3.9. If you are not accessing from localhost, there are three options

#A. Expose a port (NodePort)
#B. Proxy through the API server
#https://IP:6443/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
#C. Put a reverse proxy such as nginx in front
#To keep things simple, option A is used here

3.10. Expose a port

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
Change the type from ClusterIP to NodePort, then save
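
The same change can also be made non-interactively (a sketch using kubectl patch):

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'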

3.11. Check the service

kubectl get service -n kubernetes-dashboard -o wide
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.102.175.21    <none>        8000/TCP        17h   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.102.129.248   <none>        443:31766/TCP   17h   k8s-app=kubernetes-dashboard
#The port, 31766, can be found here

kubectl get pod -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-566cddb686-vkxvx   1/1     Running   0          17h   192.168.201.133   master   <none>           <none>
kubernetes-dashboard-7b5bf5d559-m6xt7        1/1     Running   0          17h   192.168.201.132   master   <none>           <none>
#The host (node) can be found here

3.12. Now the service can be reached directly on the master

https://MASTER_IP:31766

Ignore all the HTTPS security warnings
Log in with the token

Setting Up a Kubernetes Environment 01

1. Hardware and network environment
This time cloud VMs are used to build the K8S environment.
The minimum VM spec is 2 cores and 4 GB of RAM.

Node     Internal IP
master   172.16.172.101
node01   172.16.172.102
node02   172.16.172.103

2. Configure the k8s repository on all three nodes

sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo vi /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
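
If you prefer not to open an editor, the same line can be written with tee (a minimal sketch):

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list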

3. Install the required packages on all three nodes

sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl

4. Initialize the master node
4.1. Optionally pre-pull the images

sudo kubeadm config images pull

4.2. Run kubeadm init

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 17.12.1-ce. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu18 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.3.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu18 localhost] and IPs [10.0.3.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.525105 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu18 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1zdpa5.vmcsacag4wj3a0gv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
--discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

4.3. Set up kubectl for the regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Join the two worker nodes

sudo kubeadm join 172.16.172.101:6443 --token 1zdpa5.vmcsacag4wj3a0gv \
    --discovery-token-ca-cert-hash sha256:7944eedc04dcc943aa795dc515c4e8cd2f9d78127e1cf88c1931a5778296bb97

[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

6. Check the nodes from the master (they stay NotReady until a pod network add-on such as Calico is installed, which is done in the next section)

kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   15m   v1.16.2
node01   NotReady   <none>   10m   v1.16.2
node02   NotReady   <none>   10m   v1.16.2

Setting Up a CDH6 Environment on Ubuntu18 03

1. Make sure cdh01 can reach cdh02 and cdh03 over SSH

#This userid is the same one that has passwordless sudo
ssh -l userid cdh02
ssh -l userid cdh03
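
If key-based login is not set up yet, a minimal sketch (assuming the same userid exists on every node):

#Generate a key on cdh01 (if needed) and copy it to the other two nodes
ssh-keygen -t rsa
ssh-copy-id userid@cdh02
ssh-copy-id userid@cdh03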

2. Open it in a browser (everything is done through the UI from here on)
http://172.16.172.101:7180
Username: admin
Password: admin

3. Follow the wizard to create a new Cluster
Install cloudera-manager-agent on 172.16.172.101-172.16.172.103

4. Following the wizard, choose the software you need and install it
During installation, be careful to assign roles sensibly, i.e. allocate memory sensibly

5. Install these in order:
hdfs
zookeeper
hbase
yarn
hive
spark

6. Installation complete

PS:
1. If the JDBC driver cannot be found:

sudo apt-get install libmysql-java

Setting Up a CDH6 Environment on Ubuntu18 02

1. Install on cdh01

#Add the Cloudera repository
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/archive.key
sudo apt-key add archive.key
wget https://archive.cloudera.com/cm6/6.3.0/ubuntu1804/apt/cloudera-manager.list
sudo mv cloudera-manager.list /etc/apt/sources.list.d/

#Refresh the package lists
sudo apt-get update

#Install JDK 8
sudo apt-get install openjdk-8-jdk

#Install Cloudera Manager
sudo apt-get install cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server

2. Install and configure MySQL
2.1. Install MySQL

sudo apt-get install mysql-server mysql-client libmysqlclient-dev libmysql-java

2.2. Stop MySQL

sudo service mysql stop

2.3. Delete the files that are not needed

sudo rm /var/lib/mysql/ib_logfile0
sudo rm /var/lib/mysql/ib_logfile1

2.4. Edit the configuration file

sudo vi /etc/mysql/mysql.conf.d/mysqld.cnf

#Modify or add the following settings
[mysqld]
transaction-isolation = READ-COMMITTED
max_allowed_packet = 32M
max_connections = 300
innodb_flush_method = O_DIRECT

2.5. Start MySQL

sudo service mysql start

2.6. Secure the MySQL installation

sudo mysql_secure_installation

3. Create the databases and grant privileges

sudo mysql -uroot -p
-- Create the databases
-- Cloudera Manager Server
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Activity Monitor
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Reports Manager
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hue
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Hive Metastore Server
CREATE DATABASE hive DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Sentry Server
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Audit Server
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Cloudera Navigator Metadata Server
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-- Oozie
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

-- Create the users and grant privileges
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm123456';
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'amon123456';
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'rman123456';
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'hue123456';
GRANT ALL ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive123456';
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'sentry123456';
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'nav123456';
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'navms123456';
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie123456';

4. Initialize the databases

sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm scm123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql amon amon amon123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql rman rman rman123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hue hue hue123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql hive hive hive123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql sentry sentry sentry123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql nav nav nav123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql navms navms navms123456
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh mysql oozie oozie oozie123456

5. Start the server

#Start cloudera-scm-server
sudo systemctl start cloudera-scm-server

#Tail the startup log and wait for Jetty to finish starting
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

6. Log in
Open a browser at
http://172.16.172.101:7180
Username: admin
Password: admin

Setting Up a CDH6 Environment on Ubuntu18 01

1. Environment preparation

VirtualBox 6
Ubuntu 18
Cloudera CDH 6.3

2. Install Ubuntu18 in a VM, configured with:
1 CPU
4 GB RAM
300 GB disk
Two NICs: one Host-Only, one NAT

3. Clone the VM into three copies
If you copy manually, remember to change the disk UUID, the VM UUID, and the NIC MAC addresses

4. Set the IP address, hostname, and hosts file (a sample /etc/hosts sketch follows the table below)

Host    Host-Only IP
cdh01   172.16.172.101
cdh02   172.16.172.102
cdh03   172.16.172.103
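
A possible /etc/hosts sketch matching the table above (adjust to your own addresses):

#/etc/hosts on all three nodes
172.16.172.101 cdh01
172.16.172.102 cdh02
172.16.172.103 cdh03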

5. Allow passwordless sudo, at least on cdh02 and cdh03

#Edit /etc/sudoers (preferably with visudo) and add:
userid ALL=(ALL:ALL) NOPASSWD: ALL

Getting Started with TensorFlow 02: Tensor

In TensorFlow, every piece of data is a Tensor, which can be a single variable, an array, or a multi-dimensional array. A Tensor has a few important attributes:
Rank: the number of dimensions, e.g. scalar rank=0, vector rank=1, matrix rank=2
Shape: the shape, e.g. vector shape=[D0], matrix shape=[D0, D1]
Dtype: the data type, e.g. tf.float32, tf.uint8, etc.

The relationship between Rank and Shape is shown in the table below; a short code sketch follows the table.

Rank  Shape              Dimension number  Example
0     []                 0-D               A 0-D tensor. A scalar.
1     [D0]               1-D               A 1-D tensor with shape [5].
2     [D0, D1]           2-D               A 2-D tensor with shape [3, 4].
3     [D0, D1, D2]       3-D               A 3-D tensor with shape [1, 4, 3].
n     [D0, D1, … Dn-1]   n-D               A tensor with shape [D0, D1, … Dn-1].
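
As a quick illustration, a minimal Python sketch (assuming TensorFlow 2.x with eager execution; under 1.x the values would be evaluated inside a session):

import tensorflow as tf

scalar = tf.constant(3.0)                        #rank 0, shape []
vector = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])  #rank 1, shape [5]
matrix = tf.constant([[1, 2, 3, 4],
                      [5, 6, 7, 8],
                      [9, 10, 11, 12]])          #rank 2, shape [3, 4]

print(tf.rank(matrix))   #the rank (2)
print(matrix.shape)      #the shape (3, 4)
print(matrix.dtype)      #the dtype (int32)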