I'm writing a Spark (v1.6.0) batch job which reads from a Kafka topic.
For this I can use org.apache.spark.streaming.kafka.KafkaUtils#createRDD. However, I need to set the offsets for all the partitions myself, and I also need to store them somewhere (ZooKeeper? HDFS?) so that the next batch job knows where to start from.
What is the right approach to read from Kafka in a batch job?
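Something like this is what I have in mind for the batch approach — just a minimal sketch against the Spark 1.6 direct API with String keys/values; the broker list, topic, partition offsets and output path are placeholders, and actually loading/saving the offsets is still left to the job:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

object KafkaBatchRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("kafka-batch-read"))

    // Direct API: talk to the brokers, not to ZooKeeper.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")

    // One OffsetRange per partition: (topic, partition, fromOffset, untilOffset).
    // In a real run these would be loaded from wherever the previous run stored them.
    val offsetRanges = Array(
      OffsetRange("my-topic", 0, 0L, 1000L),
      OffsetRange("my-topic", 1, 0L, 1000L)
    )

    val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
      sc, kafkaParams, offsetRanges)

    // Process the batch; here just the message values are written out.
    rdd.map(_._2).saveAsTextFile("hdfs:///data/kafka-batch/run-01")

    // The untilOffsets of this run would then be persisted as the
    // fromOffsets of the next run (ZK, HDFS, a DB — that's the open question).
    sc.stop()
  }
}
```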
I'm also thinking about writing a streaming job instead, which reads with auto.offset.reset=smallest, saves its checkpoint to HDFS, and then in the next run starts from that.
But in that case, how can I fetch just once and stop the streaming job after the first batch?
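The only way I can see so far is a sketch like the one below (again with placeholder brokers, topic and paths, and a hypothetical persistOffsets helper for writing the covered offset ranges somewhere): set a flag on the driver once the first batch has been processed, then stop the context gracefully. I'm not sure this is the intended way to do it.

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

object OneShotStream {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("one-shot-stream"), Seconds(30))

    val kafkaParams = Map(
      "metadata.broker.list" -> "broker1:9092,broker2:9092",
      "auto.offset.reset"    -> "smallest")

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("my-topic"))

    @volatile var done = false   // set on the driver once the first batch finished

    stream.foreachRDD { (rdd, time) =>
      // Grab the offset ranges this batch covered, before any transformation.
      val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      // persistOffsets(ranges)  // hypothetical helper: write them to ZK/HDFS yourself

      rdd.map(_._2).saveAsTextFile(s"hdfs:///data/kafka-stream/${time.milliseconds}")
      done = true
    }

    ssc.start()
    // Poll until the first batch has been processed, then shut down gracefully.
    while (!done) ssc.awaitTerminationOrTimeout(1000)
    ssc.stop(stopSparkContext = true, stopGracefully = true)
  }
}
```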