# How does RapidMiner calculate Term Frequency (TF)?

It doesn't seem to be (number of term occurrences) / (number of words in the document).

For example, in this pretend document "safe safe horizontal counterexample tape tape tape occassion"

I get the following term frequencies:

counterexample = .250

horizontal = .250

occassion = .250

safe = .500

tape = .750

Thanks

Neil


## Answers

**Unicorn:** If you add another "tape", does the score change to 1.0?

If you then add another new word, "fred" for example, does the score change to 0.8?

If so, then I reckon the denominator is the number of unique words excluding the word in the numerator.

Andrew

**Maven:** Thanks for the message.

I changed the document to "safe safe horizontal counterexample tape tape tape occassion tape".

TF is now:

counterexample = .209

horizontal = .209

occassion = .209

safe = .417

tape = .834

**Unicorn:** How strange. I repeated your experiment and I reckon the denominator is the square root of the sum of the squares of the frequencies of the unique words. The numerator is the frequency of the word being considered.

So in the second example, the sum of the squares of the frequencies is 23 (4 + 1 + 1 + 16 + 1), and dividing each frequency by √23 gives the tf values 0.209, 0.417 and 0.834.

Adding another "tape" makes the values 0.177, 0.354 and 0.884 which corresponds to a denominator of the square root of 32.

regards

Andrew
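Andrew's deduction is easy to check with a short sketch (the `tf_scores` helper below is illustrative, not RapidMiner's actual code): divide each term's raw count by the square root of the sum of the squared counts.

```python
import math
from collections import Counter

def tf_scores(text):
    """L2-normalized term frequencies: count / sqrt(sum of squared counts)."""
    counts = Counter(text.split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {term: c / norm for term, c in counts.items()}

doc = "safe safe horizontal counterexample tape tape tape occassion tape"
scores = tf_scores(doc)
print(round(scores["horizontal"], 3))  # 0.209
print(round(scores["safe"], 3))        # 0.417
print(round(scores["tape"], 3))        # 0.834
```

These match the values Neil reported, which supports the sum-of-squares hypothesis.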

**Maven:** Can someone at Rapid-I confirm this?

Thanks!

Neil

**RM Founder:** Here is the source code of the term frequency calculation:

As you can see, the "expected" term frequency, that is, the number of occurrences of the term in the document divided by the total number of terms in the document, is calculated first. After that, we normalize the calculated frequencies by the square root of the sum of all squared frequencies of this document (the part at // Normalize Vector). Please note that this normalization is only done if "term frequency" is selected. If you select TFIDF, the "usual" term frequency is calculated first and multiplied by the IDF part before a similar normalization (dividing by the square root of the sum of all squared TFIDF values) is performed.

Why do we normalize? This normalization ensures that the L2 norm of every vector is 1, which makes the vectors better suited for comparisons and similarity calculations; I would recommend this normalization in general over, for example, simply dividing by the maximum. By the way: with this L2 normalization, the cosine similarity simply equals the scalar product of the vectors. Therefore, this normalization is also known as cosine normalization.

Cheers,

Ingo
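The two steps Ingo describes can be sketched as follows, assuming simple whitespace tokenization (the helper names are illustrative, not RapidMiner's implementation):

```python
import math
from collections import Counter

def l2_normalize(vec):
    """Divide every component by the vector's Euclidean (L2) norm."""
    norm = math.sqrt(sum(v * v for v in vec.values()))
    return {k: v / norm for k, v in vec.items()}

def term_frequencies(text):
    """Step 1: raw TF = occurrences / total terms. Step 2: L2 ("cosine") normalization."""
    tokens = text.split()
    raw_tf = {t: c / len(tokens) for t, c in Counter(tokens).items()}
    return l2_normalize(raw_tf)

a = term_frequencies("safe safe horizontal counterexample tape tape tape occassion")
b = term_frequencies("safe tape tape")

# After L2 normalization, cosine similarity is just the dot product:
cosine = sum(a[t] * b.get(t, 0.0) for t in a)  # ≈ 0.894
```

Note that dividing the raw TF values by their L2 norm gives the same result as dividing the raw counts by theirs, which is why Andrew's count-based formula reproduces the tool's output.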

**Contributor I:** Could you please tell me how the "IDF part" is realized? Is it something like IDF(t) = log(total number of documents / number of documents containing term t)?

Thank you.

BR

Thomas

**Unicorn:** Yes, that's the formula for the inverse document frequency (IDF).

Best regards,

Marius
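For completeness, the IDF formula Thomas quotes can be sketched in a few lines (the log base is a convention; the natural log is shown here, and the function name is hypothetical):

```python
import math

def idf(total_docs, docs_with_term):
    """IDF(t) = log(total number of documents / number of documents containing t)."""
    return math.log(total_docs / docs_with_term)

# A term appearing in 10 of 1000 documents:
print(round(idf(1000, 10), 3))  # 4.605
```

The rarer a term is across the corpus, the larger its IDF, so rare terms get boosted when TF is multiplied by IDF.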