Filter Tokens (by POS Tags) without generating n-grams

HeikoeWin786 Member Posts: 64 Contributor II
Dear all,

I am running Process Documents from Data with the operators Tokenize, Transform Cases, Filter Tokens (by Length), Filter Stopwords (English), Stem (Porter), and Filter Tokens (by POS Tags).

The process is taking very long to run, almost 6 hours.

I am not sure whether I am doing something incorrectly.
May I know if it is OK to use Filter Tokens (by POS Tags) without generating n-grams, or must the n-grams be generated first?

thanks

Best Answer

  • jacobcybulski Member, University Professor Posts: 391 Unicorn
    edited December 2020 · Solution Accepted
    I think this is happening because the Porter stemmer (like Snowball) is algorithmic and does not produce part-of-speech tags. For the POS filter to work, you may need a dictionary-based stemmer such as WordNet. Try skipping the POS filter and see if that makes any difference.
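
    To see the effect outside RapidMiner, here is a rough Python/NLTK sketch (the sentence is just an example, and it assumes the NLTK 'punkt', tagger, and WordNet data packages have been downloaded):

    ```python
    # Why algorithmic stemming breaks POS tagging: Porter output is often
    # not a dictionary word, so a tagger can no longer classify it reliably.
    from nltk import pos_tag, word_tokenize
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    tokens = word_tokenize("the studies were carefully analysed")

    # Tagging the raw tokens works as expected.
    print(pos_tag(tokens))

    # Porter stemming first yields non-words like 'studi', which the tagger
    # then has to guess at, so a downstream POS filter misfires.
    stemmer = PorterStemmer()
    print(pos_tag([stemmer.stem(t) for t in tokens]))

    # A dictionary-based normaliser (WordNet) returns real words,
    # so part-of-speech information remains usable.
    lemmatizer = WordNetLemmatizer()
    print(pos_tag([lemmatizer.lemmatize(t) for t in tokens]))
    ```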

Answers

  • jacobcybulski Member, University Professor Posts: 391 Unicorn
    Also, as a test, downsample your documents to just a few to see if the process gets through them at all.
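
    For instance, sketched in plain Python rather than RapidMiner, with preprocess as a hypothetical stand-in for the whole operator chain, timing a handful of sampled documents quickly shows whether the pipeline gets through at all:

    ```python
    # Sanity-check a text pipeline on a small random sample before the
    # full run; 'preprocess' is a hypothetical stand-in for the chain
    # tokenize -> transform cases -> filter -> stem -> POS filter.
    import random
    import time

    def preprocess(doc):
        return doc.lower().split()  # placeholder for the real operators

    documents = ["document number %d" % i for i in range(10000)]
    sample = random.sample(documents, 5)  # downsample to just a few

    start = time.perf_counter()
    for doc in sample:
        preprocess(doc)
    print("5 documents took %.3f s" % (time.perf_counter() - start))
    ```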
  • jacobcybulski Member, University Professor Posts: 391 Unicorn
    edited December 2020
    n-grams turn pairs or triples of tokens that commonly go together into single tokens, such as not-bad; they have no impact on POS-tag filtering. However, generating n-grams will slow processing considerably.
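
    As a minimal Python sketch of what term n-gram generation does to a token stream (the underscore join is just this sketch's convention):

    ```python
    # Adjacent tokens are joined into single terms, so the vocabulary
    # (and hence the processing work) grows with each extra n.
    def ngrams(tokens, n):
        return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    tokens = ["the", "movie", "was", "not", "bad"]
    print(tokens + ngrams(tokens, 2))  # unigrams plus bigrams
    # ['the', 'movie', 'was', 'not', 'bad',
    #  'the_movie', 'movie_was', 'was_not', 'not_bad']
    ```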
  • HeikoeWin786 Member Posts: 64 Contributor II
    @jacobcybulski

    Thanks a lot. Based on your input, I did some research and things are much clearer to me now.