A Disaster Caused by a macOS Upgrade

1. Last week I upgraded my wife's Mac from macOS Mojave to Catalina, and then the trouble began.
Every app that needed screenshots, screen sharing, or screen recording stopped working. Specifically:
1) Under System Preferences -> Security & Privacy, a new "Screen Recording" item appeared, but its app list was empty.
2) Screen recording via the keyboard shortcut still worked.
3) Opening QuickTime Player and trying to record the screen produced a prompt saying screen recording had previously been denied and asking me to change the permission; but since the list was empty, there was nothing to change.
4) Other apps, such as WeChat and Tencent Meeting, prompted me to grant the permission, yet the "Screen Recording" list remained empty.

2. Searching online turned up users both in China and abroad hitting the same problem. I tried the suggested fixes, with no luck.

# In normal mode, this reports failure
sudo tccutil reset ScreenCapture

# In recovery mode, this reports success, but has no effect after a reboot
tccutil reset ScreenCapture

3. So I called Apple. A support lady spent a long time helping over a remote video session and tried every configuration-reset method available, still without success. In the end she suggested reinstalling the system.

4. Reinstalled. No change at all.

5. Upgrading to Big Sur at least brought some progress:
1) Under System Preferences -> Security & Privacy, the "Screen Recording" list was still empty, and adding any app had no effect.
2) Screen recording via the keyboard shortcut worked.
3) QuickTime Player could record the screen.
4) Other apps, such as WeChat and Tencent Meeting, still prompted for the permission, and the "Screen Recording" list stayed empty.

6. After a long search I finally solved it. The steps:
1) Reboot, and hold Command+R when you hear the startup chime to enter Recovery Mode.
2) In Recovery Mode, open Terminal and disable SIP (System Integrity Protection):

csrutil disable

3) Reboot into normal mode and rename the TCC.db file:

sudo mv /Library/Application\ Support/com.apple.TCC/TCC.db /Library/Application\ Support/com.apple.TCC/TCC.db.bak

4) Reboot into Recovery Mode again.
5) In Recovery Mode, open Terminal and re-enable SIP (System Integrity Protection):

csrutil enable

6) Reboot into normal mode.
7) You will find the problem is gone.

All in all, this looks like a macOS upgrade bug. It does not occur often, yet it has spanned two major releases without a fix.
(Put another way, Big Sur's "fix" was merely to whitelist QuickTime; the underlying problem was never addressed, which suggests the Apple engineers involved may never have located the root cause.)

Solving this myself took more than four hours. An ordinary user without a programming background would probably have no option but to wipe and reinstall. I hope Apple ships a tool for this class of problem and saves everyone the time.

A Production Incident Caused by an Executor

When I first read the Alibaba Java Coding Guidelines, my reaction was: isn't all of this common knowledge? Is a written standard really necessary?

What goes around comes around; I just got bitten on a legacy project.

We have two services, both legacy projects, with code quality that is frankly hard to compliment.
Their load is actually quite small, yet with uncanny regularity they crash every two weeks; today marks the third time.

First OOM (root cause not located):
The K8S network component failed and DNS resolution broke. Investigating together with the architecture team, we found:
1) The system had once used RabbitMQ, whose code was later commented out.
2) But the POM still carried the MQ dependency, which did nothing useful except start the MQ listener.
3) Then one day the K8S network component failed and DNS resolution broke.
4) The MQ heartbeat instantly spawned thousands of threads trying to connect to the unreachable MQ server.
5) The service died.

Inferred failure chain:
DNS resolution fails -> the MQ heartbeat keeps spawning threads, with no connection pool -> threads exhaust memory -> OOM -> service dies

Fix:
Removed the MQ dependency from the POM, and assumed the problem was solved.

Second OOM (we got careless):
1) The symptom was again a K8S network component failure and broken DNS resolution.
2) The architect felt a container DNS failure was for Ops to investigate.
3) So we pulled in Ops, who were responsive and produced the logs quickly.
4) But no heap dump had been generated at OOM time. The architect suggested adding the JVM flag, only for us to discover with Ops that the flag was already set; an OOM should have produced a dump.
5) The final conclusion: an anomaly, to be investigated the next time it happened.
6) By then things already felt off; the root cause had not been found at all. But everyone was busy with the data center migration, so it was not pursued (my fault).

Inferred failure chain:
DNS resolution fails -> new threads keep being spawned to connect -> threads exhaust memory -> OOM -> but no dump file -> root cause cannot be located

Fix:
Trust to luck and wait for the next OOM to produce a dump file? What was I thinking?

Third OOM (is it really solved this time?):
1) Something felt wrong: why always every two weeks? I pulled the logs.
2) The logs showed nothing.
3) Still no OOM dump.
4) A jstack of the threads was a shock: a pile of threads waiting on condition.
5) Reading the relevant source, it turned out an Executor was in use, with both the thread count and the queue left at their defaults.
6) You know the drill: the default thread count and queue are effectively unbounded, and with the parameters set wrong, the pool never reuses its existing worker threads.
7) The service dying was only a matter of time.

Inferred failure chain:
DNS resolution fails -> the misconfigured Executor keeps creating new threads to connect instead of reusing workers -> threads exhaust memory -> OOM -> still no dump file

Fix:
Replaced the Executor's default configuration with explicit bounds.
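The guideline the Alibaba document pushes here is exactly this: never rely on an executor's defaults; construct the pool yourself with explicit worker and queue bounds, so tasks reuse a fixed set of threads. As a language-neutral sketch of the idea (Python's concurrent.futures standing in for the project's actual Java Executor, which I can't reproduce here):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: a bounded pool with at most 4 worker threads,
# reused across all submitted tasks (the opposite of our runaway Executor).
seen_threads = set()
lock = threading.Lock()

def task(i):
    # Record which worker thread actually ran this task.
    with lock:
        seen_threads.add(threading.current_thread().name)
    return i * 2

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(100)))

# 100 tasks, yet the number of worker threads never exceeds the bound.
assert len(seen_threads) <= 4
assert sum(results) == 9900
```

With an unbounded configuration, each of those 100 tasks could have landed on a fresh thread; bounded, the same four workers handle everything.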

What can I say: I now see why enforcing a standard is necessary; relying on individual skill alone does not work. Their guidelines are presumably the crystallized result of pitfalls exactly like this one.

A Production Incident Caused by PostgreSQL

A few days ago we suddenly got a report that a file-receiving service had gone down.
Our Ops colleagues restarted it quickly, and then something strange happened:
shortly after the restart, the service stopped responding again. The port was reachable, every file upload failed, and no error was logged.

So, investigation:
1) The PG database connected fine: no deadlocks, tablespaces normal, queries answered instantly.
2) The service configuration had not been touched in months; CPU, memory, storage, IO, and network were all normal.
3) The upload client's logs showed it was working correctly.
4) Only the file-receiving service was unresponsive, with no log output.
Initial verdict: something was wrong with the file-receiving service itself.

So we spun up a new cloud host and reinstalled the service: still unresponsive.
Then we cloned a known-good host and adjusted its configuration: still unresponsive.
Several hours of back and forth got us nowhere.

Out of desperation I tried inserting a row into the PG database and, well, tens of seconds with no response.
We rushed to ask the DBA: a high-availability setup with primary/standby replication was in place, the standby had failed, and that left the primary readable but not writable.
Everyone could only shrug; restarting the standby fixed the problem.

In fact the database had been checked from the start, but since it was the production database, we had only tried reads, never writes.
Still, we had all been careless:
1) The service logs were not detailed enough; even with DEBUG on, you could not tell how far a request had gotten. This hurt the most.
2) No proper timeouts were set: after receiving a file, the database write just waited forever, so the service logs showed no database error at all.
3) Database monitoring covered only the primary, not the standby.
4) The primary/standby replication should have been asynchronous but had been configured as synchronous.
5) Replication lag between primary and standby was not monitored.
6) On the production database we dared not issue writes, so we only checked query performance and deadlocks, never the slow statements.

Such a small problem slipped past every layer of monitoring, made troubleshooting extremely difficult, and burned a great deal of effort.
In hindsight: if logs are kept only for the sake of logging but cannot reflect the service's state, better not to keep them; if monitoring exists only for the sake of monitoring but has gaps, better not to monitor.
Everyday work follows the same rule: the devil is in the details, and only by controlling what should be controlled do we gain efficiency and get the results we want.
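Point 2 above suggests an easy guard: a write-path probe with a hard timeout would have surfaced the read-only primary within seconds instead of hours. A minimal sketch, with a hypothetical `do_write` callable standing in for an INSERT against the production database:

```python
import threading
import time

def probe_write(do_write, timeout_seconds=5.0):
    """Run do_write in a background thread; report False if it raises
    or does not finish within the timeout (exactly the hang we saw)."""
    result = {"ok": False}
    done = threading.Event()

    def runner():
        try:
            do_write()
            result["ok"] = True
        finally:
            done.set()

    threading.Thread(target=runner, daemon=True).start()
    return done.wait(timeout_seconds) and result["ok"]

# A healthy write passes; a hung write (like ours) fails fast.
assert probe_write(lambda: None, timeout_seconds=1.0) is True
assert probe_write(lambda: time.sleep(2), timeout_seconds=0.2) is False
```

Wired into the existing monitoring against a throwaway table, a probe like this turns "primary is silently read-only" into an alert rather than an outage.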

Quickly Raising Unit Test Coverage

I recently joined a new company and inherited dozens of legacy projects. For special project reasons, one module's unit test coverage had to be raised above 80%, fast.

With some trepidation I took a look: the module did have exactly one unit test, and overall coverage was 0. I wanted to cry.

Writing tests by hand was out of the question time-wise, so the plan was to generate them. After some searching, I settled on EvoSuite.

EvoSuite can be set up in several ways: command-line mode, Maven plugin mode, Eclipse plugin mode, IDEA plugin mode, and so on.

1. Maven plugin mode
1) Edit the POM file, adding the following in the appropriate places:

<properties>
	<evosuiteVersion>1.0.6</evosuiteVersion>
</properties>

<dependencies>
	<dependency>
		<groupId>junit</groupId>
		<artifactId>junit</artifactId>
		<version>4.12</version>
		<scope>test</scope>
	</dependency>
	<dependency>
		<groupId>org.evosuite</groupId>
		<artifactId>evosuite-standalone-runtime</artifactId>
		<version>${evosuiteVersion}</version>
		<scope>test</scope>
	</dependency>
</dependencies>

<build>
	<pluginManagement>
		<plugins>
			<plugin>
				<groupId>org.eclipse.m2e</groupId>
				<artifactId>lifecycle-mapping</artifactId>
				<version>1.0.0</version>
				<configuration>
					<lifecycleMappingMetadata>
						<pluginExecutions>
							<pluginExecution>
								<pluginExecutionFilter>
									<groupId>org.apache.maven.plugins</groupId>
									<artifactId>maven-compiler-plugin</artifactId>
									<versionRange>[2.5,)</versionRange>
									<goals>
										<goal>prepare</goal>
									</goals>
								</pluginExecutionFilter>
								<action>
									<ignore />
								</action>
							</pluginExecution>
						</pluginExecutions>
					</lifecycleMappingMetadata>
				</configuration>
			</plugin>
		</plugins>
	</pluginManagement>
	<plugins>
		<plugin>
			<groupId>org.evosuite.plugins</groupId>
			<artifactId>evosuite-maven-plugin</artifactId>
			<version>${evosuiteVersion}</version>
			<executions>
				<execution>
					<goals>
						<goal>prepare</goal>
					</goals>
					<phase>process-test-classes</phase>
				</execution>
			</executions>
		</plugin>

		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-surefire-plugin</artifactId>
			<version>2.17</version>
			<configuration>
				<properties>
					<property>
						<name>listener</name>
						<value>org.evosuite.runtime.InitializingListener</value>
					</property>
				</properties>
			</configuration>
		</plugin>

		<!--plugin>
			<groupId>org.codehaus.mojo</groupId>
			<artifactId>build-helper-maven-plugin</artifactId>
			<version>1.8</version>
			<executions>
				<execution>
					<id>add-test-source</id>
					<phase>generate-test-sources</phase>
					<goals>
						<goal>add-test-source</goal>
					</goals>
					<configuration>
						<sources>
							<source>.evosuite/evosuite-tests</source>
						</sources>
					</configuration>
				</execution>
			</executions>
		</plugin-->

		<plugin>
			<artifactId>maven-compiler-plugin</artifactId>
			<version>3.8.1</version>
			<configuration>
				<source>1.8</source>
				<target>1.8</target>
			</configuration>
		</plugin>
	</plugins>
</build>

2) Generate the unit tests:

mvn -DmemoryInMB=4000 -Dcores=4 evosuite:generate test

# The generated tests are under
# .evosuite/best-tests
# Copy them into the proper test source path

2. Command-line mode
1) Download the evosuite-1.0.6.jar package:

Get it from the Downloads page on the EvoSuite site.

2) Collect the project dependencies, and put evosuite-1.0.6.jar into the target/dependency folder as well:

mvn dependency:copy-dependencies

3) Generate the unit tests:

cd target/dependency
java -jar evosuite-1.0.6.jar -help
java -Duse_separate_classloader=false -jar evosuite-1.0.6.jar -projectCP YOUR_CLASS_PATH -generateSuite -target ..\classes

# The generated tests are under
# target/dependency/evosuite-tests
# Copy them into the proper test source path

3. Eclipse plugin mode
Install the EvoSuite plugin in Eclipse; it requires an extra update site:
http://www.evosuite.org/update

4. Unit test coverage
1) Plugin installation
In Eclipse, search for and install the EclEmma Java Code Coverage plugin.

2) Change the class loader setting

# By default a separate class loader is used, and coverage then reads 0
separateClassLoader = true
# Replace globally with
separateClassLoader = false

3) Then right-click the project and choose Coverage As -> JUnit Test to see the coverage numbers.
I tried two projects: a simple one reached over 95% coverage,
while a more complex web project managed only about 30%.

5. Summary
The generated tests have essentially no maintainability; how to use them for production code remains to be explored.

Setting Up a TiDB Environment

This section builds a single-machine TiDB test environment.
Deployment is entirely on a cloud host, running CentOS 7.6, as the root user.

1. Adjust the SSH configuration

# Allow more concurrent sessions
vi /etc/ssh/sshd_config
MaxSessions 20

# Restart sshd
service sshd restart

2. Install TiDB

# Update the system
yum -y update

# Install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

# Install the tiup cluster component
source .bash_profile
tiup cluster

3. Create the cluster topology file

# Create the configuration file
vi mytidb.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.1.111

tidb_servers:
 - host: 192.168.1.111

tikv_servers:
 - host: 192.168.1.111
   port: 20160
   status_port: 20180

 - host: 192.168.1.111
   port: 20161
   status_port: 20181

 - host: 192.168.1.111
   port: 20162
   status_port: 20182

tiflash_servers:
 - host: 192.168.1.111

monitoring_servers:
 - host: 192.168.1.111

grafana_servers:
 - host: 192.168.1.111

4. Deploy and start the cluster

# Deploy with the topology file
tiup cluster deploy mytidb v4.0.0 ./mytidb.yaml --user root -i hwk8s.pem
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster deploy mytidb v4.0.0 ./mytidb.yaml --user root -i hwk8s.pem
Please confirm your topology:
TiDB Cluster: mytidb
TiDB Version: v4.0.0
Type        Host           Ports                            OS/Arch       Directories
----        ----           -----                            -------       -----------
pd          192.168.1.111  2379/2380                        linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv        192.168.1.111  20160/20180                      linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv        192.168.1.111  20161/20181                      linux/x86_64  /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv        192.168.1.111  20162/20182                      linux/x86_64  /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb        192.168.1.111  4000/10080                       linux/x86_64  /tidb-deploy/tidb-4000
tiflash     192.168.1.111  9000/8123/3930/20170/20292/8234  linux/x86_64  /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus  192.168.1.111  9090                             linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana     192.168.1.111  3000                             linux/x86_64  /tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.0 (linux/amd64) ... Done
  - Download tikv:v4.0.0 (linux/amd64) ... Done
  - Download tidb:v4.0.0 (linux/amd64) ... Done
  - Download tiflash:v4.0.0 (linux/amd64) ... Done
  - Download prometheus:v4.0.0 (linux/amd64) ... Done
  - Download grafana:v4.0.0 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.1.111:22 ... Done
+ Copy files
  - Copy pd -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tikv -> 192.168.1.111 ... Done
  - Copy tidb -> 192.168.1.111 ... Done
  - Copy tiflash -> 192.168.1.111 ... Done
  - Copy prometheus -> 192.168.1.111 ... Done
  - Copy grafana -> 192.168.1.111 ... Done
  - Copy node_exporter -> 192.168.1.111 ... Done
  - Copy blackbox_exporter -> 192.168.1.111 ... Done
+ Check status
Deployed cluster `mytidb` successfully, you can start the cluster via `tiup cluster start mytidb`

# Start the cluster
tiup cluster start mytidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster start mytidb
Starting cluster mytidb...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [Parallel] - UserSSH: user=tidb, host=192.168.1.111
+ [ Serial ] - ClusterOperate: operation=StartOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:60 APITimeout:300 IgnoreConfigCheck:false RetainDataRoles:[] RetainDataNodes:[]}
Starting component pd
        Starting instance pd 192.168.1.111:2379
        Start pd 192.168.1.111:2379 success
Starting component node_exporter
        Starting instance 192.168.1.111
        Start 192.168.1.111 success
Starting component blackbox_exporter
        Starting instance 192.168.1.111
        Start 192.168.1.111 success
Starting component tikv
        Starting instance tikv 192.168.1.111:20162
        Starting instance tikv 192.168.1.111:20161
        Starting instance tikv 192.168.1.111:20160
        Start tikv 192.168.1.111:20162 success
        Start tikv 192.168.1.111:20161 success
        Start tikv 192.168.1.111:20160 success
Starting component tidb
        Starting instance tidb 192.168.1.111:4000
        Start tidb 192.168.1.111:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.1.111:9000
        Start tiflash 192.168.1.111:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.1.111:9090
        Start prometheus 192.168.1.111:9090 success
Starting component grafana
        Starting instance grafana 192.168.1.111:3000
        Start grafana 192.168.1.111:3000 success
Checking service state of pd
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:37 CST; 13s ago
Checking service state of tikv
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:38 CST; 12s ago
Checking service state of tidb
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:42 CST; 9s ago
Checking service state of tiflash
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:45 CST; 5s ago
Checking service state of prometheus
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:47 CST; 4s ago
Checking service state of grafana
        192.168.1.111      Active: active (running) since Thu 2020-07-02 11:38:47 CST; 4s ago
+ [ Serial ] - UpdateTopology: cluster=mytidb
Started cluster `mytidb` successfully

5. Check cluster status

# List clusters
tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster list
Name    User  Version  Path                                         PrivateKey
----    ----  -------  ----                                         ----------
mytidb  tidb  v4.0.0   /root/.tiup/storage/cluster/clusters/mytidb  /root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa


# Show cluster details
tiup cluster display mytidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.0.7/tiup-cluster display mytidb
TiDB Cluster: mytidb
TiDB Version: v4.0.0
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.1.111:3000   grafana     192.168.1.111  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.1.111:2379   pd          192.168.1.111  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.1.111:9090   prometheus  192.168.1.111  9090                             linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.1.111:4000   tidb        192.168.1.111  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.1.111:9000   tiflash     192.168.1.111  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.1.111:20160  tikv        192.168.1.111  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.1.111:20161  tikv        192.168.1.111  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.1.111:20162  tikv        192.168.1.111  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162

6. Accessing TiDB with the mysql client

# Install the MySQL yum repository
wget https://repo.mysql.com//mysql80-community-release-el7-3.noarch.rpm
rpm -Uvh mysql80-community-release-el7-3.noarch.rpm

# Install the mysql client
yum install mysql-community-client.x86_64

# Log in to TiDB
mysql -h 192.168.1.111 -P 4000 -u root

# Usage differs very little from ordinary MySQL

7. TiDB web consoles

# Performance monitoring (Grafana)
http://192.168.1.111:3000
admin/admin

# Management dashboard
http://192.168.1.111:2379/dashboard
root / empty password

InfluxDB Environment Setup 06

This section reads and writes InfluxDB data over the HTTP API.

1. InfluxDB API endpoints

Endpoint Description
/debug/pprof Generate profiles for troubleshooting
/debug/requests Track HTTP client requests to the /write and /query endpoints
/debug/vars Collect internal InfluxDB statistics
/ping Check the status of your InfluxDB instance and your version of InfluxDB
/query Query data using InfluxQL, manage databases, retention policies, and users
/write Write data to a database

2. Ping the service

curl -i 'http://localhost:8086/ping'
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: ff6febe5-bb85-11ea-8060-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: ff6febe5-bb85-11ea-8060-fa163e4dc996
Date: Wed, 01 Jul 2020 10:31:17 GMT

3. List and create databases

curl -i -XPOST http://localhost:8086/query --data-urlencode "q=show databases"
HTTP/1.1 200 OK
Content-Type: application/json
Request-Id: 4eddc157-bb86-11ea-8061-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: 4eddc157-bb86-11ea-8061-fa163e4dc996
Date: Wed, 01 Jul 2020 10:33:31 GMT
Transfer-Encoding: chunked
{"results":[{"statement_id":0,"series":[{"name":"databases","columns":["name"],"values":[["_internal"],["NOAA_water_database"]]}]}]}


curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
HTTP/1.1 200 OK
Content-Type: application/json
Request-Id: 05a455eb-bb89-11ea-8062-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: 05a455eb-bb89-11ea-8062-fa163e4dc996
Date: Wed, 01 Jul 2020 10:52:56 GMT
Transfer-Encoding: chunked
{"results":[{"statement_id":0}]}

4. Write data

curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1422568543700000000'
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: 171405ad-bb8a-11ea-8063-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: 171405ad-bb8a-11ea-8063-fa163e4dc996
Date: Wed, 01 Jul 2020 11:00:35 GMT

curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'cpu_load_short,host=server02,region=asia-east value=0.67 1422568543700000000
> cpu_load_short,host=server02,region=us-west value=0.55 1422568543900000000
> cpu_load_short,host=server01,region=asia-east value=2.0 1422568543900000000'
HTTP/1.1 204 No Content
Content-Type: application/json
Request-Id: 1ad799eb-bb8a-11ea-8064-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: 1ad799eb-bb8a-11ea-8064-fa163e4dc996
Date: Wed, 01 Jul 2020 11:00:41 GMT
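The `--data-binary` payloads above are InfluxDB line protocol: measurement name, comma-separated tags, a space, the fields, another space, and a nanosecond timestamp. A tiny helper can assemble such lines (a hypothetical convenience function, not part of any InfluxDB client library; it ignores the escaping rules for commas and spaces inside values):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Build one line of InfluxDB line protocol:
    measurement,tag=...,tag=... field=...,field=... timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

line = to_line_protocol(
    "cpu_load_short",
    {"host": "server01", "region": "us-west"},
    {"value": 0.64},
    1422568543700000000,
)
# Reproduces the payload from the first curl write above.
assert line == "cpu_load_short,host=server01,region=us-west value=0.64 1422568543700000000"
```

For multi-point writes, join several such lines with newlines, exactly as in the second curl example.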

5. Query data

# In curl, double quotes allow escape sequences and variable expansion;
# single quotes allow neither
curl -i -XPOST 'http://localhost:8086/query?pretty=true&db=mydb' --data-binary "q=select * from cpu_load_short where \"region\"='us-west'"
HTTP/1.1 200 OK
Content-Type: application/json
Request-Id: d976bd19-bb8c-11ea-8076-fa163e4dc996
X-Influxdb-Build: OSS
X-Influxdb-Version: 1.8.0
X-Request-Id: d976bd19-bb8c-11ea-8076-fa163e4dc996
Date: Wed, 01 Jul 2020 11:20:20 GMT
Transfer-Encoding: chunked
{
    "results": [
        {
            "statement_id": 0,
            "series": [
                {
                    "name": "cpu_load_short",
                    "columns": [
                        "time",
                        "host",
                        "region",
                        "value"
                    ],
                    "values": [
                        [
                            "2015-01-29T21:55:43.7Z",
                            "server01",
                            "us-west",
                            0.64
                        ],
                        [
                            "2015-01-29T21:55:43.9Z",
                            "server02",
                            "us-west",
                            0.55
                        ]
                    ]
                }
            ]
        }
    ]
}

InfluxDB Environment Setup 05

This section lists the InfluxQL functions.

1. InfluxQL function list

Category Function Purpose
Aggregations COUNT() count of values
Aggregations DISTINCT() deduplicated values
Aggregations INTEGRAL() area under the curve of the values
Aggregations MEAN() arithmetic mean
Aggregations MEDIAN() median
Aggregations MODE() most frequent value
Aggregations SPREAD() difference between the maximum and minimum values
Aggregations STDDEV() standard deviation
Aggregations SUM() sum
Selectors BOTTOM() smallest N values
Selectors FIRST() oldest value
Selectors LAST() newest value
Selectors MAX() maximum value
Selectors MIN() minimum value
Selectors PERCENTILE() value at the given percentile
Selectors SAMPLE() random sample
Selectors TOP() largest N values
Transformations ABS() absolute value
Transformations ACOS() arccosine
Transformations ASIN() arcsine
Transformations ATAN() arctangent (quadrants I and IV)
Transformations ATAN2() arctangent (all four quadrants)
Transformations CEIL() round up
Transformations COS() cosine
Transformations CUMULATIVE_SUM() running sum from the first value of the series
Transformations DERIVATIVE() difference between adjacent values divided by the time difference
Transformations DIFFERENCE() difference between adjacent values
Transformations ELAPSED() difference between timestamps
Transformations EXP() exponential
Transformations FLOOR() round down
Transformations HISTOGRAM() provided by Flux: approximates the series values as a specified histogram distribution
Transformations LN() natural logarithm
Transformations LOG() logarithm with a given base
Transformations LOG2() base-2 logarithm
Transformations LOG10() base-10 logarithm
Transformations MOVING_AVERAGE() rolling average over the series
Transformations NON_NEGATIVE_DERIVATIVE() like DERIVATIVE(), keeping only non-negative results
Transformations NON_NEGATIVE_DIFFERENCE() like DIFFERENCE(), keeping only non-negative results
Transformations POW() power
Transformations ROUND() round to the nearest integer
Transformations SIN() sine
Transformations SQRT() square root
Transformations TAN() tangent
Predictors HOLT_WINTERS() forecasting
Technical Analysis CHANDE_MOMENTUM_OSCILLATOR() Chande Momentum Oscillator (CMO): the difference between the sum of all recent higher data points and the sum of all recent lower data points, divided by the total data movement over the period and multiplied by 100, giving a range of -100 to +100.
Technical Analysis EXPONENTIAL_MOVING_AVERAGE() exponential moving average: like a simple moving average, but weights the latest data more heavily, so it reacts faster to recent changes than a simple moving average.
Technical Analysis DOUBLE_EXPONENTIAL_MOVING_AVERAGE() double exponential moving average: doubles the EMA and, to stay aligned with the actual data and eliminate lag, subtracts the "EMA of the EMA" from that doubled value.
Technical Analysis KAUFMANS_EFFICIENCY_RATIO() Kaufman's Efficiency Ratio: the data change over a period divided by the absolute sum of the movements it took to get there; the ratio lies between 0 and 1, with higher values indicating a more efficient, trending market.
Technical Analysis KAUFMANS_ADAPTIVE_MOVING_AVERAGE() Kaufman's Adaptive Moving Average: designed to account for sample noise and volatility; KAMA tracks data points closely when swings are small and noise is low, and follows from a greater distance when swings widen. Useful for identifying the overall trend, timing turning points, and filtering data movement.
Technical Analysis TRIPLE_EXPONENTIAL_MOVING_AVERAGE() triple exponential moving average: filters out the volatility of conventional moving averages; effectively a combination of a single, a double, and a triple exponential moving average.
Technical Analysis TRIPLE_EXPONENTIAL_DERIVATIVE() triple exponential derivative (TRIX): an oscillator for identifying oversold and overbought markets that can also serve as a momentum indicator; it takes the triple EMA of the log of the input over a period and subtracts the previous value from the current one, which keeps the indicator from reflecting cycles shorter than the stated period.
Technical Analysis RELATIVE_STRENGTH_INDEX() relative strength index (RSI): a momentum indicator comparing the magnitude of recent gains and losses over a period to measure the speed and change of data movement.

2. Time units accepted by the time() function

Unit Meaning
ns nanoseconds
u or µ microseconds
ms milliseconds
s seconds
m minutes
h hours
d days
w weeks
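A number plus one of these units forms a duration literal such as 10s or 4w. For illustration only (InfluxDB parses these server-side; this is a hypothetical helper, not client code), a small converter to seconds:

```python
# Seconds per InfluxQL time unit, per the table above.
UNIT_SECONDS = {"ns": 1e-9, "u": 1e-6, "µ": 1e-6, "ms": 1e-3,
                "s": 1.0, "m": 60.0, "h": 3600.0, "d": 86400.0, "w": 604800.0}

def duration_to_seconds(literal):
    """Convert a duration literal like '10s' or '4w' to seconds."""
    # Check two-character units ('ns', 'ms') before single-character ones,
    # so '300ms' is not misread as minutes.
    for unit in sorted(UNIT_SECONDS, key=len, reverse=True):
        if literal.endswith(unit):
            return float(literal[:-len(unit)]) * UNIT_SECONDS[unit]
    raise ValueError(f"unrecognized duration literal: {literal}")

assert duration_to_seconds("10s") == 10.0
assert duration_to_seconds("5m") == 300.0
assert duration_to_seconds("4w") == 2419200.0
```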

InfluxDB Environment Setup 04

This section covers some of InfluxDB's advanced features.

Time series workloads generate very large volumes of data, and much of it matters most within a limited window; server performance logs are a typical example.
So recent data is generally stored in full at the original sampling interval, while the further back the data goes, the coarser the retained sampling interval can be.
For server performance logs, for instance:
the last week can keep one sample every 10s,
the last month one every 5m,
the last three months one every 1h,
the last half year one every 1d,
the last year one every 1w,
and anything older can be deleted.
Downsampling like this is common enough in time series databases that it is generally supported out of the box.
InfluxDB addresses it with two mechanisms:
Continuous Queries, which periodically downsample the data, and
Retention Policies, which periodically delete the high-frequency samples.
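Some back-of-the-envelope arithmetic shows how much these tiers save for a single series (an illustrative calculation, not InfluxDB code):

```python
# Rough point counts for the sampling tiers above, for one series.
SECONDS = {"m": 60, "h": 3600, "d": 86400, "w": 604800}

def points(window_seconds, interval_seconds):
    # Number of retained samples in the window at the given interval.
    return window_seconds // interval_seconds

tiers = {
    "last week @ 10s":     points(7 * SECONDS["d"], 10),
    "last month @ 5m":     points(30 * SECONDS["d"], 5 * SECONDS["m"]),
    "last 3 months @ 1h":  points(90 * SECONDS["d"], SECONDS["h"]),
    "last half year @ 1d": points(182 * SECONDS["d"], SECONDS["d"]),
    "last year @ 1w":      points(365 * SECONDS["d"], SECONDS["w"]),
}
# The oldest tier holds three orders of magnitude fewer points than the newest.
assert tiers["last week @ 10s"] == 60480
assert tiers["last year @ 1w"] == 52
```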

1. Continuous Query

> SHOW CONTINUOUS QUERIES

> CREATE CONTINUOUS QUERY "my_cq" ON "NOAA_water_database"
RESAMPLE EVERY 1w
BEGIN
  SELECT MEAN("water_level") INTO "water_level_averages_coyote_creek_4w" FROM "h2o_feet" WHERE "location" = 'coyote_creek' GROUP BY time(4w)
END

> CREATE CONTINUOUS QUERY "my_cq" ON "NOAA_water_database"
BEGIN
  SELECT MEAN("water_level") INTO "transportation"."weeks24"."water_level_averages_coyote_creek_4w" FROM "h2o_feet" WHERE "location" = 'coyote_creek' GROUP BY time(4w)
END

> DROP CONTINUOUS QUERY "my_cq" ON "NOAA_water_database"

2. Retention Policy

> CREATE RETENTION POLICY "my_rp" ON "my_database" DURATION 3d REPLICATION 1 SHARD DURATION 1h

> CREATE RETENTION POLICY "my_rp" ON "my_database" DURATION 3d REPLICATION 1 SHARD DURATION 1h default

> DROP RETENTION POLICY "my_rp" ON "my_database"

3. Create a database

# The default RP is autogen
> CREATE DATABASE "my_database"
# Create a database whose default RP is my_rp
# my_rp: keep data for 3 days, one replica, shard groups spanning 1 hour
> CREATE DATABASE "my_database" WITH DURATION 3d REPLICATION 1 SHARD DURATION 1h NAME "my_rp"

4. Drop a database

> drop database NOAA_water_database

5. Drop a series

> DROP SERIES FROM "h2o_feet"
> DROP SERIES FROM "h2o_feet" WHERE "location" = 'santa_monica'
> DROP SERIES WHERE "location" = 'santa_monica'

6. Delete data within a series

> DELETE FROM "h2o_feet"
> DELETE FROM "h2o_quality" WHERE "randtag" = '3'
> DELETE WHERE time < '2016-01-01'

7. Drop a measurement

> DROP MEASUREMENT "h2o_feet"

8. Drop a shard

> show shards
name: _internal
id database  retention_policy shard_group start_time           end_time             expiry_time          owners
-- --------  ---------------- ----------- ----------           --------             -----------          ------
18 _internal monitor          18          2020-07-01T00:00:00Z 2020-07-02T00:00:00Z 2020-07-09T00:00:00Z

name: NOAA_water_database
id database            retention_policy shard_group start_time           end_time             expiry_time          owners
-- --------            ---------------- ----------- ----------           --------             -----------          ------
19 NOAA_water_database autogen          19          2019-08-05T00:00:00Z 2019-08-12T00:00:00Z 2019-08-12T00:00:00Z
10 NOAA_water_database autogen          10          2019-08-12T00:00:00Z 2019-08-19T00:00:00Z 2019-08-19T00:00:00Z
11 NOAA_water_database autogen          11          2019-08-19T00:00:00Z 2019-08-26T00:00:00Z 2019-08-26T00:00:00Z
12 NOAA_water_database autogen          12          2019-08-26T00:00:00Z 2019-09-02T00:00:00Z 2019-09-02T00:00:00Z
13 NOAA_water_database autogen          13          2019-09-02T00:00:00Z 2019-09-09T00:00:00Z 2019-09-09T00:00:00Z
14 NOAA_water_database autogen          14          2019-09-09T00:00:00Z 2019-09-16T00:00:00Z 2019-09-16T00:00:00Z
15 NOAA_water_database autogen          15          2019-09-16T00:00:00Z 2019-09-23T00:00:00Z 2019-09-23T00:00:00Z

> drop shard 10

9. Kill a slow query

> show queries
qid query        database            duration status
--- -----        --------            -------- ------
78  SHOW QUERIES NOAA_water_database 58µs     running

> kill query 78

InfluxDB Environment Setup 03

This section covers InfluxDB queries.

1. Basic queries

influx -database NOAA_water_database
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0

> SELECT * FROM "h2o_feet" WHERE time>1568750040000000000 ORDER BY time DESC
name: h2o_feet
time                level description    location     water_level
----                -----------------    --------     -----------
1568756520000000000 between 3 and 6 feet santa_monica 4.938
1568756160000000000 between 3 and 6 feet santa_monica 5.066
1568755800000000000 between 3 and 6 feet santa_monica 5.01
1568755440000000000 between 3 and 6 feet santa_monica 5.013
1568755080000000000 between 3 and 6 feet santa_monica 5.072
1568754720000000000 between 3 and 6 feet santa_monica 5.213
1568754360000000000 between 3 and 6 feet santa_monica 5.341
1568754000000000000 between 3 and 6 feet santa_monica 5.338
1568753640000000000 between 3 and 6 feet santa_monica 5.322
1568753280000000000 between 3 and 6 feet santa_monica 5.24
1568752920000000000 between 3 and 6 feet santa_monica 5.302
1568752560000000000 between 3 and 6 feet santa_monica 5.62
1568752200000000000 between 3 and 6 feet santa_monica 5.604
1568751840000000000 between 3 and 6 feet santa_monica 5.502
1568751480000000000 between 3 and 6 feet santa_monica 5.551
1568751120000000000 between 3 and 6 feet santa_monica 5.459
1568750760000000000 between 3 and 6 feet santa_monica 5.62
1568750400000000000 between 3 and 6 feet santa_monica 5.627

> SELECT "location","water_level" FROM "h2o_feet" WHERE time>1568750040000000000
name: h2o_feet
time                location     water_level
----                --------     -----------
1568750400000000000 santa_monica 5.627
1568750760000000000 santa_monica 5.62
1568751120000000000 santa_monica 5.459
1568751480000000000 santa_monica 5.551
1568751840000000000 santa_monica 5.502
1568752200000000000 santa_monica 5.604
1568752560000000000 santa_monica 5.62
1568752920000000000 santa_monica 5.302
1568753280000000000 santa_monica 5.24
1568753640000000000 santa_monica 5.322
1568754000000000000 santa_monica 5.338
1568754360000000000 santa_monica 5.341
1568754720000000000 santa_monica 5.213
1568755080000000000 santa_monica 5.072
1568755440000000000 santa_monica 5.013
1568755800000000000 santa_monica 5.01
1568756160000000000 santa_monica 5.066
1568756520000000000 santa_monica 4.938

> SELECT *::field FROM "h2o_feet" WHERE time>1568750040000000000
name: h2o_feet
time                level description    water_level
----                -----------------    -----------
1568750400000000000 between 3 and 6 feet 5.627
1568750760000000000 between 3 and 6 feet 5.62
1568751120000000000 between 3 and 6 feet 5.459
1568751480000000000 between 3 and 6 feet 5.551
1568751840000000000 between 3 and 6 feet 5.502
1568752200000000000 between 3 and 6 feet 5.604
1568752560000000000 between 3 and 6 feet 5.62
1568752920000000000 between 3 and 6 feet 5.302
1568753280000000000 between 3 and 6 feet 5.24
1568753640000000000 between 3 and 6 feet 5.322
1568754000000000000 between 3 and 6 feet 5.338
1568754360000000000 between 3 and 6 feet 5.341
1568754720000000000 between 3 and 6 feet 5.213
1568755080000000000 between 3 and 6 feet 5.072
1568755440000000000 between 3 and 6 feet 5.013
1568755800000000000 between 3 and 6 feet 5.01
1568756160000000000 between 3 and 6 feet 5.066
1568756520000000000 between 3 and 6 feet 4.938

> SELECT "water_level"-3 FROM "h2o_feet" WHERE time>1568750040000000000
name: h2o_feet
time                water_level
----                -----------
1568750400000000000 2.627
1568750760000000000 2.62
1568751120000000000 2.4589999999999996
1568751480000000000 2.551
1568751840000000000 2.502
1568752200000000000 2.604
1568752560000000000 2.62
1568752920000000000 2.3019999999999996
1568753280000000000 2.24
1568753640000000000 2.322
1568754000000000000 2.338
1568754360000000000 2.341
1568754720000000000 2.213
1568755080000000000 2.072
1568755440000000000 2.013
1568755800000000000 2.01
1568756160000000000 2.066
1568756520000000000 1.9379999999999997

> SELECT * FROM "NOAA_water_database"."autogen"."h2o_feet" WHERE time>1568750040000000000
name: h2o_feet
time                level description    location     water_level
----                -----------------    --------     -----------
1568750400000000000 between 3 and 6 feet santa_monica 5.627
1568750760000000000 between 3 and 6 feet santa_monica 5.62
1568751120000000000 between 3 and 6 feet santa_monica 5.459
1568751480000000000 between 3 and 6 feet santa_monica 5.551
1568751840000000000 between 3 and 6 feet santa_monica 5.502
1568752200000000000 between 3 and 6 feet santa_monica 5.604
1568752560000000000 between 3 and 6 feet santa_monica 5.62
1568752920000000000 between 3 and 6 feet santa_monica 5.302
1568753280000000000 between 3 and 6 feet santa_monica 5.24
1568753640000000000 between 3 and 6 feet santa_monica 5.322
1568754000000000000 between 3 and 6 feet santa_monica 5.338
1568754360000000000 between 3 and 6 feet santa_monica 5.341
1568754720000000000 between 3 and 6 feet santa_monica 5.213
1568755080000000000 between 3 and 6 feet santa_monica 5.072
1568755440000000000 between 3 and 6 feet santa_monica 5.013
1568755800000000000 between 3 and 6 feet santa_monica 5.01
1568756160000000000 between 3 and 6 feet santa_monica 5.066
1568756520000000000 between 3 and 6 feet santa_monica 4.938

> SELECT * FROM "h2o_feet" WHERE "water_level" > 9.9
name: h2o_feet
time                level description         location     water_level
----                -----------------         --------     -----------
1566975960000000000 at or greater than 9 feet coyote_creek 9.902
1566976320000000000 at or greater than 9 feet coyote_creek 9.938
1566976680000000000 at or greater than 9 feet coyote_creek 9.957
1566977040000000000 at or greater than 9 feet coyote_creek 9.964
1566977400000000000 at or greater than 9 feet coyote_creek 9.954
1566977760000000000 at or greater than 9 feet coyote_creek 9.941
1566978120000000000 at or greater than 9 feet coyote_creek 9.925
1566978480000000000 at or greater than 9 feet coyote_creek 9.902
1567380600000000000 at or greater than 9 feet coyote_creek 9.902

> SELECT * FROM "h2o_feet" WHERE "level description" = 'below 3 feet' and "water_level" >= 3

> SELECT "water_level" FROM "h2o_feet" WHERE "location" = 'santa_monica' and "water_level" < -0.2
name: h2o_feet
time                water_level
----                -----------
1566988560000000000 -0.243
1567077840000000000 -0.21
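
The `time` values in the transcripts above are nanosecond Unix epoch timestamps (the influx shell's default precision). A minimal Python sketch for converting them to RFC3339, using a hypothetical helper name:

```python
from datetime import datetime, timezone

def ns_to_rfc3339(ns: int) -> str:
    """Convert an InfluxDB nanosecond epoch timestamp to an RFC3339 string."""
    return datetime.fromtimestamp(ns // 1_000_000_000, tz=timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )

# first timestamp of the water_level output above
print(ns_to_rfc3339(1568750400000000000))  # → 2019-09-17T20:00:00Z
```

Inside the shell itself, `precision rfc3339` switches the output to this format directly.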

2、GROUP BY queries

> SELECT MEAN("water_level") FROM "h2o_feet" GROUP BY "location"
name: h2o_feet
tags: location=coyote_creek
time mean
---- ----
0    5.3591424203039155

name: h2o_feet
tags: location=santa_monica
time mean
---- ----
0    3.5307120942458807

> SELECT COUNT("water_level") FROM "h2o_feet" WHERE "location"='coyote_creek' GROUP BY time(4w)
name: h2o_feet
time                count
----                -----
1565222400000000000 4559
1567641600000000000 3045
1570060800000000000 0
1572480000000000000 0
1574899200000000000 0
1577318400000000000 0
1579737600000000000 0
1582156800000000000 0
1584576000000000000 0
1586995200000000000 0
1589414400000000000 0
1591833600000000000 0

> SELECT * FROM "h2o_feet" WHERE time > now() - 7d
> SELECT * FROM "h2o_feet" WHERE time = '2020-07-01T00:00:00Z'
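
The `time(4w)` buckets in the COUNT output start at multiples of the four-week interval counted from the Unix epoch (InfluxDB's preset boundaries). A sketch of that flooring logic, checked against the first bucket above:

```python
FOUR_WEEKS_NS = 4 * 7 * 86_400 * 1_000_000_000  # the time(4w) interval in ns

def bucket_start(ns: int, interval_ns: int = FOUR_WEEKS_NS) -> int:
    """Floor a nanosecond timestamp to the start of its GROUP BY time() bucket."""
    return ns - (ns % interval_ns)

# the earliest sample point (1566000000000000000) lands in the bucket starting
# at 1565222400000000000 — the first row of the COUNT output above
print(bucket_start(1566000000000000000))  # → 1565222400000000000
```

The trailing rows with count 0 appear because, without an upper time bound, InfluxDB fills buckets up to `now()`.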

3、Paginated queries

> SELECT "water_level","location" FROM "h2o_feet" LIMIT 3
name: h2o_feet
time                water_level location
----                ----------- --------
1566000000000000000 8.12        coyote_creek
1566000000000000000 2.064       santa_monica
1566000360000000000 8.005       coyote_creek

> SELECT "water_level","location" FROM "h2o_feet" LIMIT 3 OFFSET 3
name: h2o_feet
time                water_level location
----                ----------- --------
1566000360000000000 2.116       santa_monica
1566000720000000000 7.887       coyote_creek
1566000720000000000 2.028       santa_monica

> SELECT "water_level" FROM "h2o_feet" GROUP BY * LIMIT 3 SLIMIT 1
name: h2o_feet
tags: location=coyote_creek
time                water_level
----                -----------
1566000000000000000 8.12
1566000360000000000 8.005
1566000720000000000 7.887

> SELECT "water_level" FROM "h2o_feet" GROUP BY * LIMIT 3 SLIMIT 1 SOFFSET 1
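
`LIMIT` caps the number of points and `OFFSET` skips points, so client-side paging is just a matter of stepping the offset. A hypothetical helper that generates the two paged queries shown above:

```python
def paged_queries(measurement: str, page_size: int, pages: int):
    """Yield successive LIMIT/OFFSET queries (hypothetical paging helper)."""
    for page in range(pages):
        offset = page * page_size
        query = f'SELECT "water_level","location" FROM "{measurement}" LIMIT {page_size}'
        if offset:
            query += f" OFFSET {offset}"
        yield query

for q in paged_queries("h2o_feet", 3, 2):
    print(q)
```

Note that OFFSET paging rescans the skipped points on every page; for large scans a time-based cursor (`WHERE time > <last seen timestamp>`) is usually cheaper.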

4、Queries with regular expressions

> SELECT /l/ FROM "h2o_feet" LIMIT 1
name: h2o_feet
time                level description location     water_level
----                ----------------- --------     -----------
1566000000000000000 below 3 feet      santa_monica 2.064

> SELECT MEAN("degrees") FROM /temperature/
name: average_temperature
time mean
---- ----
0    79.98472932232272

name: h2o_temperature
time mean
---- ----
0    64.98872722506226

> SELECT MEAN(water_level) FROM "h2o_feet" WHERE "location" =~ /[m]/ AND "water_level" > 3
name: h2o_feet
time mean
---- ----
0    4.471366691627881

> SELECT FIRST("index") FROM "h2o_quality" GROUP BY /l/
name: h2o_quality
tags: location=coyote_creek
time                first
----                -----
1566000000000000000 41

name: h2o_quality
tags: location=santa_monica
time                first
----                -----
1566000000000000000 99

> SELECT MEAN("water_level") FROM "h2o_feet" WHERE "location" = 'santa_monica' AND "level description" =~ /between/
name: h2o_feet
time mean
---- ----
0    4.471366691627881
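
`=~` applies a Go-style regular expression to tag or string field values. The `"location" =~ /[m]/ AND "water_level" > 3` filter can be mirrored client-side with Python's `re` (sample rows drawn from the outputs above):

```python
import re

# (location, water_level) rows taken from the sample outputs
rows = [
    ("coyote_creek", 9.902),
    ("santa_monica", 5.627),
    ("santa_monica", 2.064),
]

# equivalent of: WHERE "location" =~ /[m]/ AND "water_level" > 3
pattern = re.compile(r"[m]")
matched = [(loc, lvl) for loc, lvl in rows if pattern.search(loc) and lvl > 3]
print(matched)  # → [('santa_monica', 5.627)]
```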

5、Nested queries

> SELECT SUM("max") FROM (SELECT MAX("water_level")  AS "max" FROM "h2o_feet" GROUP BY "location")
name: h2o_feet
time sum
---- ---
0    17.169
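
The inner query computes one MAX per location; the outer query sums those maxima. The same two-pass logic in plain Python, with a few illustrative points (not the full dataset) chosen so the per-location maxima add up to the result above:

```python
# illustrative points; per-location maxima are 9.964 and 7.205
points = [
    ("coyote_creek", 9.902),
    ("coyote_creek", 9.964),
    ("santa_monica", 7.205),
    ("santa_monica", 5.627),
]

# inner query: MAX("water_level") GROUP BY "location"
max_per_location: dict = {}
for location, level in points:
    if location not in max_per_location or level > max_per_location[location]:
        max_per_location[location] = level

# outer query: SUM("max")
print(round(sum(max_per_location.values()), 3))  # → 17.169
```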

6、SELECT INTO

> SELECT "water_level" INTO "h2o_feet_coyote_creek" FROM "h2o_feet" WHERE "location" = 'coyote_creek'
name: result
time written
---- -------
0    7604
> SELECT MEAN("water_level") INTO "water_level_averages_4w" FROM "h2o_feet" WHERE "location" = 'coyote_creek' GROUP BY time(4w)
name: result
time written
---- -------
0    2
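
`written 2` for the downsampling query is expected: the coyote_creek data spans exactly two four-week buckets, so two mean points are written. A quick check, assuming the data range visible in the transcripts above (epoch seconds):

```python
FOUR_WEEKS_S = 4 * 7 * 86_400  # the GROUP BY time(4w) interval in seconds

# approximate data range taken from the sample outputs (epoch seconds)
first, last = 1566000000, 1568756520
buckets = last // FOUR_WEEKS_S - first // FOUR_WEEKS_S + 1
print(buckets)  # → 2
```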

7、Copying an entire database

> CREATE DATABASE "NOAA_water_database_copy"
> SELECT * INTO "NOAA_water_database_copy"."autogen".:MEASUREMENT FROM "NOAA_water_database"."autogen"./.*/ GROUP BY *
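
`:MEASUREMENT` in the destination is a backreference: each source measurement is written into a measurement of the same name, and `GROUP BY *` preserves tags as tags instead of converting them to fields. The copy query is equivalent to issuing one statement per measurement, sketched here:

```python
def copy_statements(src_db: str, dst_db: str, measurements, rp: str = "autogen"):
    """Expand the :MEASUREMENT copy into one SELECT INTO per measurement."""
    return [
        f'SELECT * INTO "{dst_db}"."{rp}"."{m}" FROM "{src_db}"."{rp}"."{m}" GROUP BY *'
        for m in measurements
    ]

for stmt in copy_statements("NOAA_water_database", "NOAA_water_database_copy",
                            ["h2o_feet", "h2o_quality"]):
    print(stmt)
```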

InfluxDB Environment Setup 02

This section covers installing InfluxDB and importing the test dataset.

1、Installing InfluxDB

# Configure the apt repository
wget -qO- https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/lsb-release
echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list

# Install influxdb
sudo apt-get update
sudo apt-get install influxdb

# Start the service
sudo service influxdb start
# or
sudo systemctl start influxdb

# Run in the foreground
influxd -config /etc/influxdb/influxdb.conf

# View the logs
journalctl -u influxdb
# Write the logs to a file
journalctl -u influxdb > influxd.log

# Ports open by default
# TCP 8086  client operations (HTTP API)
# TCP 8088  backup and restore

# Ports closed by default
# TCP 2003  Graphite service
# TCP 4242  OpenTSDB service
# UDP 8089  UDP service
# TCP 25826 collectd service

2、Importing the test data

# Fetch the data
wget https://s3.amazonaws.com/noaa.water-database/NOAA_data.txt

# Create the database
influx
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0
> CREATE DATABASE NOAA_water_database
> exit

# Import the data
influx -import -path=NOAA_data.txt -precision=s -database=NOAA_water_database
2020/06/15 16:34:37 Processed 1 commands
2020/06/15 16:34:37 Processed 76290 inserts
2020/06/15 16:34:37 Failed 0 inserts
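
NOAA_data.txt is in InfluxDB line protocol with second-precision timestamps, which is why the import passes `-precision=s`. A sketch of how one such record is formed (hypothetical helper; string field values must be double-quoted, tag values must not):

```python
def line_protocol(measurement: str, tags: dict, fields: dict, ts: int) -> str:
    """Build one line-protocol record (hypothetical helper, seconds precision)."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_parts = [
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in fields.items()
    ]
    return f"{measurement},{tag_str} {','.join(field_parts)} {ts}"

# the first h2o_feet point from the query samples above
print(line_protocol("h2o_feet", {"location": "coyote_creek"},
                    {"water_level": 8.12}, 1566000000))
# → h2o_feet,location=coyote_creek water_level=8.12 1566000000
```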

3、Inspecting the database

# Inspect the database
influx
Connected to http://localhost:8086 version 1.8.0
InfluxDB shell version: 1.8.0

> show databases
name: databases
name
----
_internal
NOAA_water_database

> use NOAA_water_database
Using database NOAA_water_database

> show measurements
name: measurements
name
----
average_temperature
h2o_feet
h2o_pH
h2o_quality
h2o_temperature

> show retention policies
name    duration shardGroupDuration replicaN default
----    -------- ------------------ -------- -------
autogen 0s       168h0m0s           1        true

> show series
key
---
average_temperature,location=coyote_creek
average_temperature,location=santa_monica
h2o_feet,location=coyote_creek
h2o_feet,location=santa_monica
h2o_pH,location=coyote_creek
h2o_pH,location=santa_monica
h2o_quality,location=coyote_creek,randtag=1
h2o_quality,location=coyote_creek,randtag=2
h2o_quality,location=coyote_creek,randtag=3
h2o_quality,location=santa_monica,randtag=1
h2o_quality,location=santa_monica,randtag=2
h2o_quality,location=santa_monica,randtag=3
h2o_temperature,location=coyote_creek
h2o_temperature,location=santa_monica

> show tag keys
name: average_temperature
tagKey
------
location

name: h2o_feet
tagKey
------
location

name: h2o_pH
tagKey
------
location

name: h2o_quality
tagKey
------
location
randtag

name: h2o_temperature
tagKey
------
location

> show tag values with key="location"
name: average_temperature
key      value
---      -----
location coyote_creek
location santa_monica

name: h2o_feet
key      value
---      -----
location coyote_creek
location santa_monica

name: h2o_pH
key      value
---      -----
location coyote_creek
location santa_monica

name: h2o_quality
key      value
---      -----
location coyote_creek
location santa_monica

name: h2o_temperature
key      value
---      -----
location coyote_creek
location santa_monica

> show field keys
name: average_temperature
fieldKey fieldType
-------- ---------
degrees  float

name: h2o_feet
fieldKey          fieldType
--------          ---------
level description string
water_level       float

name: h2o_pH
fieldKey fieldType
-------- ---------
pH       float

name: h2o_quality
fieldKey fieldType
-------- ---------
index    float

name: h2o_temperature
fieldKey fieldType
-------- ---------
degrees  float