Tokenization of Arabic words.
Could you please help me with a problem related to the Tokenize parameter?
After applying tokenization, the output contained only an index of the records; the tokens themselves were missing. Where are the tokens?
I have tried more than one dataset, but the problem persists.
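To make the expected behavior concrete, here is a minimal sketch of the kind of output I am looking for. This is only an illustration using Python's standard-library `re` module as a stand-in tokenizer, not the actual Tokenize operator I am using:

```python
import re

# Sample Arabic sentence: "The student went to the school"
text = "ذهب الطالب إلى المدرسة"

# A simple whitespace/word-boundary tokenizer as a stand-in:
# re's \w matches Arabic letters under Python 3's default Unicode mode.
tokens = re.findall(r"\w+", text)

# I expect a list of word tokens like this, not a list of record indices.
print(tokens)
```

In other words, I expect each record's text to be split into a list of Arabic word tokens, rather than the output being reduced to record numbers.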
Thank you in advance.