Data Source: A Port Monitored by Flume
2022/8/31 23:22:49
This article introduces how to use a port monitored by Flume as a data source for Spark Streaming, and should be a useful reference for anyone working on this kind of integration.
Push-based
- Flume actively pushes the collected data to the Spark program; if the Spark receiver is not up, data delivery fails easily. Push-based integration is implemented by sinking the data to an avro port.
- Add the dependencies for integrating Spark Streaming with Flume
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-flume_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
</dependencies>
```
- Define the Flume agent configuration script, setting the sink to an avro-type port sink
```
[root@node1 data]# vi portToSpark.conf

# declare the agent's sources, sinks and channels
a1.sources = s1
a1.sinks = k1
a1.channels = c1

# configure the source
a1.sources.s1.type = netcat
a1.sources.s1.bind = node1
a1.sources.s1.port = 44444

# configure the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = node1
a1.sinks.k1.port = 8888
a1.sinks.k1.batch-size = 1

# configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# wire the source, channel and sink together
a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
```
- Use the FlumeUtils.createStream method to read, in real time, the data Flume delivers to the avro port
```scala
package SparkStreaming.flume

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object ByFlumePush {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[3]").setAppName("hdfs")
    val ssc: StreamingContext = new StreamingContext(conf, Seconds(10))
    // receive the events pushed by the Flume avro sink on node1:8888
    val ds = FlumeUtils.createStream(ssc, "node1", 8888, StorageLevel.MEMORY_ONLY)
    ds.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```
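Note that ds.print() prints SparkFlumeEvent objects rather than the text typed into the netcat source. As a minimal sketch (assuming the input is UTF-8 text), the event bodies can be decoded by replacing ds.print() above with:

```scala
// decode each SparkFlumeEvent's Avro body into a UTF-8 string
val lines = ds.map(event => new String(event.event.getBody.array(), "UTF-8"))
lines.print()
```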
- Startup
```
1. Start the Flume agent:
   flume-ng agent -n a1 -f portToSpark.conf -Dflume.root.logger=INFO
2. Run the main class: package the code into a jar, upload it to node1, then submit it:
   spark-submit --class SparkStreaming.flume.ByFlumePush ssc.jar
3. Open the monitored port and type some data:
   [root@node1 ~]# telnet node1 44444
```
- Note:
The Spark Streaming program and the Flume agent must run on the same node, and the Spark Streaming application jar must bundle the spark-streaming-flume_2.11:2.3.1 dependency and all of its transitive dependencies (see the packaging sketch below).
(The jar used here was built by someone else and is saved at G://shixun//ssc.jar.)
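As a minimal sketch of that packaging step (the plugin version shown is an assumption; the original post does not specify a build setup), the maven-shade-plugin can bundle the integration dependency and its transitive dependencies into the application jar:

```xml
<build>
    <plugins>
        <!-- bundle spark-streaming-flume and its transitive deps into the application jar -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <version>3.2.4</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

In practice, spark-streaming_2.11 itself can be given provided scope so the shaded jar carries only the Flume integration classes that the cluster does not already ship.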
Pull-based
- Flume delivers the collected data to its sink, but the sink does not hand it to Spark immediately; instead it buffers the data, and the Spark receiver actively pulls from the sink on demand.
The pull-based integration is implemented with a Spark-specific sink and is the recommended approach.
- Add the dependencies:
```xml
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-flume_2.11</artifactId>
        <version>2.3.1</version>
    </dependency>
</dependencies>
```
- Define the Flume script file as in the push example, but change the sink type to SparkSink
```
[root@node1 data]# vi portToSpark2.conf

# declare the agent's sources, sinks and channels
a1.sources = s1
a1.sinks = k1
a1.channels = c1

# configure the source
a1.sources.s1.type = netcat
a1.sources.s1.bind = node1
a1.sources.s1.port = 44444

# configure the sink
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = node1
a1.sinks.k1.port = 8888
a1.sinks.k1.batch-size = 1

# configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# wire the source, channel and sink together
a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
```
- Define the reader program
```scala
package SparkStreaming.flume

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

// renamed from ByFlumePush for clarity: this variant pulls from the SparkSink
object ByFlumePoll {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[3]").setAppName("hdfs")
    val ssc: StreamingContext = new StreamingContext(conf, Seconds(10))
    // actively pull the buffered events from the SparkSink on node1:8888
    val ds = FlumeUtils.createPollingStream(ssc, "node1", 8888, StorageLevel.MEMORY_ONLY)
    ds.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```
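createPollingStream can also poll several SparkSink endpoints from a single receiver. A minimal sketch, assuming a second agent on a hypothetical host node2 with the same sink configuration:

```scala
import java.net.InetSocketAddress

// one receiver pulling from multiple SparkSink endpoints (node2 is hypothetical)
val addresses = Seq(new InetSocketAddress("node1", 8888), new InetSocketAddress("node2", 8888))
val ds = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_ONLY)
```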
- Note:
Copy the Spark Streaming dependency jars into Flume's lib directory, including the spark-streaming-flume dependency jar, so the Flume agent can load the SparkSink class:
```
[root@node1 jars]# pwd
/opt/app/spark-2.3.1/jars
[root@node1 jars]# cp spark-streaming_2.11-2.3.1.jar /opt/app/flume-1.8.0/lib/
[root@node1 data]# pwd
/opt/data
[root@node1 data]# cp ssc.jar /opt/app/flume-1.8.0/lib/
```
(ssc.jar is a prebuilt jar from someone else, saved at G://shixun//ssc.jar.)
- Startup
```
[root@node1 data]# flume-ng agent -n a1 -f portToSpark2.conf -Dflume.root.logger=INFO,console
[root@node1 data]# spark-submit --class SparkStreaming.flume.ByFlumePoll ssc2.jar
[root@node1 data]# telnet node1 44444
```