I commonly work with text files of around 20 GB in size, and I find myself counting the number of lines in a given file very often.
The way I do it now is just cat fname | wc -l
, and it takes very long. Is there any solution that would be much faster?
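For reference, the command above can be trimmed slightly: piping through cat adds an extra process and an extra copy of the data through the pipe, so letting wc open the file directly is the usual form (sample.txt below is a small stand-in for the real ~20 GB files):

```shell
# Small stand-in file for the real ~20 GB inputs
printf 'line1\nline2\nline3\n' > sample.txt

# Current approach: works, but the cat is an extra process and pipe copy
cat sample.txt | wc -l

# Letting wc read the file directly avoids the pipe entirely
wc -l < sample.txt   # prints just the count
wc -l sample.txt     # prints the count followed by the filename
```

This alone will not change the fact that every byte of the file still has to be read once, but it removes the avoidable overhead.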
I work in a high performance cluster with Hadoop installed. I was wondering if a map reduce approach could help.
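To make the map-reduce idea concrete: count chunks of the file independently ("map"), then sum the partial counts ("reduce"). I don't have a real Hadoop streaming job to show, but the same idea can be sketched locally with standard tools; the chunk size and the -P parallelism level here are made-up values for illustration:

```shell
# Stand-in input; the real case would be one ~20 GB file
printf 'a\nb\nc\nd\n' > big.txt

# "Map": split into chunks, then count each chunk in parallel
split -l 2 big.txt chunk_

# "Reduce": sum the per-chunk line counts
printf '%s\n' chunk_* | \
    xargs -P 2 -n 1 sh -c 'wc -l < "$1"' sh | \
    awk '{s += $1} END {print s}'   # → 4
```

Whether this beats a plain wc -l depends on whether the storage can actually serve parallel reads faster than one sequential scan.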
I'd like the solution to be as simple to run as a one-liner, like the wc -l
solution, but I'm not sure how feasible that is.
Any ideas?