Token filters - like the ASCIIFoldingFilter - are, at their core, TokenStreams, so they are what an Analyzer returns, mainly through the following method:
public abstract TokenStream tokenStream(String fieldName, Reader reader);
As you have noticed, the filters take a TokenStream as input. They act as wrappers or, more precisely, as decorators of their input: they extend the behavior of the contained TokenStream, performing their own operation on top of the work done by the wrapped stream.
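To make the wrapping concrete, here is a minimal sketch that chains an ASCIIFoldingFilter around a StandardTokenizer and pulls the folded terms out of the stream. It assumes a Lucene 2.9-era API (TermAttribute, incrementToken(), StandardTokenizer(Reader)); the class name and sample text are made up for illustration, and newer Lucene versions require calling reset() before consuming the stream and use CharTermAttribute instead:

import java.io.StringReader;
import org.apache.lucene.analysis.ASCIIFoldingFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

public class FoldingDemo {
    public static void main(String[] args) throws Exception {
        // The tokenizer produces the raw tokens; the filter decorates it and
        // folds accented characters to their ASCII equivalents.
        TokenStream stream = new ASCIIFoldingFilter(
                new StandardTokenizer(new StringReader("Ångström résumé")));
        TermAttribute term = stream.addAttribute(TermAttribute.class);
        while (stream.incrementToken()) {
            System.out.println(term.term()); // prints "Angstrom", then "resume"
        }
        stream.close();
    }
}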
You can find an explanation here. It does not refer directly to the ASCIIFoldingFilter, but the same principle applies. Basically, you create a custom Analyzer with something like this in it (stripped-down example):
public class CustomAnalyzer extends Analyzer {

    // other content omitted
    // ...

    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new StandardTokenizer(reader);
        result = new StandardFilter(result);
        result = new LowerCaseFilter(result);
        // etc etc ...
        result = new StopFilter(result, yourSetOfStopWords);
        result = new ASCIIFoldingFilter(result);
        return result;
    }

    // ...
}
Both the TokenFilter and the Tokenizer are subclasses of TokenStream.
Also remember that you must use the same custom analyzer for both indexing and searching, or your queries may return incorrect results.
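As a minimal end-to-end sketch of that point, assuming the same 2.9-era Lucene API as the analyzer above (the IndexWriter, QueryParser, and IndexSearcher constructors changed in later releases), the field name, sample text, and class names here are illustrative only. The key idea is that indexing and searching share one analyzer, so the folding is applied identically on both sides:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class SameAnalyzerSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new CustomAnalyzer();
        Directory directory = new RAMDirectory();

        // Index with the custom analyzer so accented terms are folded at write time.
        IndexWriter writer = new IndexWriter(directory, analyzer, true,
                IndexWriter.MaxFieldLength.UNLIMITED);
        Document doc = new Document();
        doc.add(new Field("content", "Ångström résumé",
                Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();

        // Search with the very same analyzer so the query terms are folded identically.
        QueryParser parser = new QueryParser("content", analyzer);
        Query query = parser.parse("angstrom resume");
        IndexSearcher searcher = new IndexSearcher(directory);
        TopDocs hits = searcher.search(query, 10);
        System.out.println("Matches: " + hits.totalHits); // expect 1
        searcher.close();
    }
}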