Kouhei Sutou
null+****@clear*****
Sun Oct 26 10:41:59 JST 2014
Kouhei Sutou	2014-10-26 10:41:59 +0900 (Sun, 26 Oct 2014)

  New Revision: affeeacff8841b856c2c8fd9b835f910e293dd46
  https://github.com/groonga/groonga/commit/affeeacff8841b856c2c8fd9b835f910e293dd46

  Message:
    doc: fix English

  Modified files:
    doc/source/reference/token_filters.rst

  Modified: doc/source/reference/token_filters.rst (+2 -2)
===================================================================
--- doc/source/reference/token_filters.rst    2014-10-26 10:39:23 +0900 (c7846e5)
+++ doc/source/reference/token_filters.rst    2014-10-26 10:41:59 +0900 (96003ce)
@@ -43,7 +43,7 @@ Here are the list of available token filters:
 in searching the documents.
 
 ``TokenFilterStopWord`` can specify stop word after adding the
-documents, because It removes token in searching the documents.
+documents because it removes token in searching the documents.
 
 The stop word is specified ``is_stop_word`` column on lexicon table.
 
@@ -72,7 +72,7 @@ Here is an example that uses ``TokenFilterStopWord`` token filter:
 
 ``TokenFilterStem``
 ^^^^^^^^^^^^^^^^^^^
 
-``TokenFilterStem`` stemming tokenized token.
+``TokenFilterStem`` stems tokenized token.
 
 Here is an example that uses ``TokenFilterStem`` token filter:
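
For readers following from the archive: the paragraph being reworded describes how ``TokenFilterStopWord`` is driven by an ``is_stop_word`` column on the lexicon table and is applied only when searching. A minimal sketch of that usage in Groonga's command syntax might look like the following (the ``Memos``/``Terms`` table names and the ``and`` stop word are illustrative only, not part of this commit):

  register token_filters/stop_word

  table_create Memos TABLE_NO_KEY
  column_create Memos content COLUMN_SCALAR ShortText

  # Lexicon table with TokenFilterStopWord attached
  table_create Terms TABLE_PAT_KEY ShortText --default_tokenizer TokenBigram --normalizer NormalizerAuto --token_filters TokenFilterStopWord
  column_create Terms memos_content COLUMN_INDEX|WITH_POSITION Memos content

  # Stop words are specified via the is_stop_word column on the lexicon table
  column_create Terms is_stop_word COLUMN_SCALAR Bool
  load --table Terms
  [
  {"_key": "and", "is_stop_word": true}
  ]

Because the filter removes stop-word tokens at search time, entries can be loaded into ``is_stop_word`` after the documents have already been indexed, without re-adding them. ``TokenFilterStem`` (from the ``token_filters/stem`` plugin) is attached the same way via ``--token_filters``.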