Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
572 views
in Technique [Technology] by (71.8m points)

python - Pyspark: Select part of the string (file path) column values

Pyspark: Split and select part of the string column values

How can I select the characters (the remaining file path) after the 4th backslash (counting from the left) in a string column of a Spark DataFrame?

Sample rows of the pyspark column:

\\D\Dev\johnny\Desktop\TEST
\\D\Dev\matt\Desktop\TEST\NEW
\\D\Dev\matt\Desktop\TEST\OLD\TEST
\\E\dev\peter\Desktop\RUN\SUBFOLDER\New
\\K\924prod\ums\Desktop\RUN\SUBFOLDER\New
\\LE\345jskx\rfk\Desktop\RUN\SUBFOLDER\New
.
.
.
\\ls\53f7sn3\vsohskmwqsdsfkse

Expected Output

johnny\Desktop\TEST
matt\Desktop\TEST\NEW
matt\Desktop\TEST\OLD\TEST
peter\Desktop\RUN\SUBFOLDER\New
ums\Desktop\RUN\SUBFOLDER\New
rfk\Desktop\RUN\SUBFOLDER\New
.
.
.
vsohskmwqsdsfkse

My previous question led to this new question. Appreciate any help.


1 Answer

0 votes
by (71.8m points)

You may use a regular expression with regexp_replace, e.g.

from pyspark.sql import functions as F

# match everything up to and including the 4th backslash:
# two leading backslashes, then two backslash-terminated segments
df = df.withColumn('sub_path', F.regexp_replace("path", r"^\\\\[a-zA-Z0-9]+\\[a-zA-Z0-9]+\\", ""))
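As a sanity check (not part of the original answer): Spark's regexp_replace uses the Java regex engine, but this particular pattern behaves the same under Python's re module, so the prefix match can be exercised locally without a SparkSession. The pattern must be a raw string so the backslashes reach the regex engine intact:

```python
import re

# Two literal leading backslashes, then two backslash-terminated
# segments -- i.e. everything up to and including the 4th backslash.
pattern = r"^\\\\[a-zA-Z0-9]+\\[a-zA-Z0-9]+\\"

print(re.sub(pattern, "", r"\\D\Dev\johnny\Desktop\TEST"))
# johnny\Desktop\TEST
```

Without the raw-string prefix, the halved backslashes leave a trailing bare `\` in the pattern, which the regex engine rejects as an invalid escape.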

You may also make this solution more flexible, e.g.

from pyspark.sql import functions as F
no_of_slashes = 4  # number of backslashes to consume from the left

# we build the regular expression by repeating r"[a-zA-Z0-9]+\\"
# NB: we subtract 2 since the leading r"^\\\\" already covers the first 2 backslashes
df = df.withColumn('sub_path', F.regexp_replace("path", r"^\\\\" + (r"[a-zA-Z0-9]+\\" * (no_of_slashes - 2)), ""))

Let me know if this works for you.
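An alternative worth mentioning (my addition, not from the answer above) is to split the path and rejoin the tail, which avoids hand-building the regex. The plain-Python sketch below shows the logic; in PySpark the rough column-wise equivalent would be F.array_join(F.slice(F.split('path', r'\\'), 5, 10000), '\\') on Spark 2.4+ (Spark's slice is 1-indexed, hence 5 = n + 1), though that exact line is untested here:

```python
def tail_after_nth_backslash(path: str, n: int = 4) -> str:
    """Return everything after the n-th backslash (from the left)."""
    # n backslashes produce n leading fields, so the tail starts at parts[n]
    parts = path.split("\\")
    return "\\".join(parts[n:])

print(tail_after_nth_backslash(r"\\D\Dev\johnny\Desktop\TEST"))
# johnny\Desktop\TEST
```

This variant also behaves predictably when a segment contains characters outside [a-zA-Z0-9], which the regex-based version would fail to match.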

