As Spark is moving to the V2 API, you now have to implement DataSourceV2, MicroBatchReadSupport, and DataSourceRegister.
This will involve creating your own implementations of Offset, MicroBatchReader, DataReader&lt;Row&gt;, and DataReaderFactory&lt;Row&gt;.
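To make those pieces concrete, here is a minimal sketch against the Spark 2.3-era V2 interfaces (the ones that still use `DataReader<Row>` and `DataReaderFactory<Row>`). All of the `My*` names, the schema, and the placeholder rows are mine for illustration, not anything from a real source:

```scala
import java.util.{List => JList, Optional}

import org.apache.spark.sql.Row
import org.apache.spark.sql.sources.DataSourceRegister
import org.apache.spark.sql.sources.v2.reader.{DataReader, DataReaderFactory}
import org.apache.spark.sql.sources.v2.reader.streaming.{MicroBatchReader, Offset}
import org.apache.spark.sql.sources.v2.{DataSourceOptions, DataSourceV2, MicroBatchReadSupport}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Entry point: Spark instantiates this class (by fully-qualified name or alias).
class MySocketSourceProvider extends DataSourceV2
    with MicroBatchReadSupport with DataSourceRegister {

  override def shortName(): String = "my-socket" // hypothetical alias

  override def createMicroBatchReader(
      schema: Optional[StructType],
      checkpointLocation: String,
      options: DataSourceOptions): MicroBatchReader =
    new MySocketMicroBatchReader(options)
}

// Offsets must round-trip through JSON so Spark can checkpoint progress.
case class MyOffset(value: Long) extends Offset {
  override def json(): String = value.toString
}

class MySocketMicroBatchReader(options: DataSourceOptions) extends MicroBatchReader {
  private var start: MyOffset = MyOffset(0L)
  private var end: MyOffset = MyOffset(0L)

  override def setOffsetRange(start: Optional[Offset], end: Optional[Offset]): Unit = {
    this.start = start.orElse(MyOffset(0L)).asInstanceOf[MyOffset]
    // A real source would set the end offset from whatever it has buffered so far.
    this.end = end.orElse(MyOffset(this.start.value)).asInstanceOf[MyOffset]
  }

  override def getStartOffset: Offset = start
  override def getEndOffset: Offset = end
  override def deserializeOffset(json: String): Offset = MyOffset(json.toLong)
  override def commit(end: Offset): Unit = {} // discard data up to `end` here
  override def stop(): Unit = {}              // close the socket here

  override def readSchema(): StructType =
    StructType(StructField("value", StringType) :: Nil)

  override def createDataReaderFactories(): JList[DataReaderFactory[Row]] =
    java.util.Arrays.asList[DataReaderFactory[Row]](
      new MyReaderFactory(start.value, end.value))
}

// Factories are serialized and shipped to executors, one per partition.
class MyReaderFactory(from: Long, to: Long) extends DataReaderFactory[Row] {
  override def createDataReader(): DataReader[Row] = new MyDataReader(from, to)
}

// Runs on an executor; here it just emits placeholder rows for the offset range.
class MyDataReader(from: Long, to: Long) extends DataReader[Row] {
  private var current = from - 1
  override def next(): Boolean = { current += 1; current < to }
  override def get(): Row = Row(s"record-$current")
  override def close(): Unit = {}
}
```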
There are some examples of custom Structured Streaming sources online (in Scala) that were helpful to me when writing mine.
Once you've implemented your custom source, you can follow Jacek Laskowski's answer to register it.
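For reference, usage would look roughly like this once the provider above is on the classpath. You can pass `format()` either the fully-qualified provider class name, or the `shortName()` alias if you've also listed the class in a `META-INF/services/org.apache.spark.sql.sources.DataSourceRegister` file (the names below are from my sketch, so adjust to yours):

```scala
import org.apache.spark.sql.SparkSession

object CustomSourceDemo extends App {
  val spark = SparkSession.builder()
    .appName("custom-source-demo")
    .master("local[*]")
    .getOrCreate()

  // "my-socket" is the hypothetical shortName() alias from the sketch above;
  // "com.example.MySocketSourceProvider" would work without the services file.
  val stream = spark.readStream
    .format("my-socket")
    .option("host", "localhost")
    .option("port", "9999")
    .load()

  stream.writeStream.format("console").start().awaitTermination()
}
```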
Also, depending on the encoding of the messages you'll receive from the socket, you may be able to just use the built-in socket source and a custom map function to parse each line into whatever beans you'll be using, as in the sketch below. Do note, though, that Spark's documentation says the built-in socket source is for testing only and shouldn't be used in production!
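Here is what that simpler route might look like. The `Measurement` case class and the comma-separated wire format are assumptions for the example; substitute your own payload type and parsing:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical payload type; stands in for whatever bean/case class you use.
case class Measurement(sensor: String, value: Double)

object SocketMapDemo extends App {
  val spark = SparkSession.builder()
    .appName("socket-map-demo")
    .master("local[*]")
    .getOrCreate()
  import spark.implicits._

  // The built-in "socket" source yields a single string column named "value".
  // Assuming one "sensor,value" record per line, an ordinary map does the parsing.
  val measurements = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", "9999")
    .load()
    .as[String]
    .map { line =>
      val Array(sensor, value) = line.split(",", 2)
      Measurement(sensor, value.toDouble)
    }

  measurements.writeStream.format("console").start().awaitTermination()
}
```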
Hope this helps!