good multinomial Naive Bayes or libSVM Text Classification

7JK7 Member Posts: 2 Learner I

Hi everyone,

 

I am relatively new to RapidMiner and machine learning in general. I couldn't find anything in the forum that solves my problem; sorry if I missed something there.

 

I want to build a model that predicts the correct party for a given speech. I have a large corpus of around 20,000 speeches, each labeled with one of six parties. For that I did some preprocessing (transform cases, filter stopwords, stemming, ...) and built a model via split validation (70% train, 30% test). I tried a few classification methods: Naive Bayes, libSVM, and multinomial Naive Bayes (Weka extension).
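For reference, a rough scikit-learn equivalent of this setup might look like the sketch below (outside RapidMiner). The file path and the simplified tokenization are assumptions; the column names speech and party are taken from the process XML further down, and the NLTK stopword list has to be downloaded once.

# Rough scikit-learn sketch of the setup described above (illustrative only).
import pandas as pd
from nltk.corpus import stopwords                  # requires: nltk.download("stopwords")
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_excel("speeches.xlsx")                # hypothetical path
texts, labels = df["speech"], df["party"]

stemmer = SnowballStemmer("german")
german_stopwords = set(stopwords.words("german"))

def analyzer(text):
    # Lowercase, keep alphabetic tokens with at least 4 characters,
    # drop German stopwords, then stem -- a rough stand-in for the
    # Transform Cases / Tokenize / Filter Stopwords / Stem operators.
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return [stemmer.stem(t) for t in tokens
            if t not in german_stopwords and len(t) >= 4]

X = TfidfVectorizer(analyzer=analyzer).fit_transform(texts)

# 70/30 split, stratified so every party is present in both partitions.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=2001)

for name, model in [("Multinomial NB", MultinomialNB()),
                    ("Linear SVM", LinearSVC(C=1.0))]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))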

 

Only with the normal Naive Bayes did I get a usable result, around 40% accuracy. With libSVM and multinomial Naive Bayes (MNB), almost all speeches are predicted as the same party.
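One way to make "almost all speeches are predicted as the same party" concrete is to inspect the confusion matrix and compare against a majority-class baseline. A minimal sketch, continuing from the variables of the snippet above:

# Assumes X_train, X_test, y_train, y_test from the sketch above.
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import LinearSVC

# Majority-class baseline: the accuracy any useful model should beat.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))

# A model that collapses onto one party shows up as a single dominant
# column in the confusion matrix.
svm = LinearSVC(C=1.0).fit(X_train, y_train)
print(confusion_matrix(y_test, svm.predict(X_test)))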

 

But in the literature the latter two models are recommended for text classification, which is why I am asking here whether someone could give me advice on how to set up a better libSVM/MNB model. I am also happy about any other helpful advice.

 

Thanks

 

Jan



<?xml version="1.0" encoding="UTF-8"?>
<process version="9.8.000">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="9.8.000" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="SYSTEM"/>
    <process expanded="true">
      <operator activated="true" class="read_excel" compatibility="9.8.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="34">
        <parameter key="excel_file" value="H:\Daten\speeches_Thüringen_speech_party_speaker.xlsx"/>
        <parameter key="sheet_selection" value="sheet number"/>
        <parameter key="sheet_number" value="1"/>
        <parameter key="imported_cell_range" value="A1:B10485776"/>
        <parameter key="encoding" value="SYSTEM"/>
        <parameter key="first_row_as_names" value="true"/>
        <list key="annotations"/>
        <parameter key="date_format" value=""/>
        <parameter key="time_zone" value="SYSTEM"/>
        <parameter key="locale" value="German (Germany)"/>
        <parameter key="read_all_values_as_polynominal" value="false"/>
        <list key="data_set_meta_data_information">
          <parameter key="0" value="speech.true.polynominal.attribute"/>
          <parameter key="1" value="party.true.polynominal.label"/>
        </list>
        <parameter key="read_not_matching_values_as_missings" value="false"/>
        <parameter key="datamanagement" value="double_array"/>
        <parameter key="data_management" value="auto"/>
      </operator>
      <operator activated="true" class="nominal_to_text" compatibility="9.8.000" expanded="true" height="82" name="Nominal to Text" width="90" x="179" y="34">
        <parameter key="attribute_filter_type" value="single"/>
        <parameter key="attribute" value="speech"/>
        <parameter key="attributes" value=""/>
        <parameter key="use_except_expression" value="false"/>
        <parameter key="value_type" value="nominal"/>
        <parameter key="use_value_type_exception" value="false"/>
        <parameter key="except_value_type" value="file_path"/>
        <parameter key="block_type" value="single_value"/>
        <parameter key="use_block_type_exception" value="false"/>
        <parameter key="except_block_type" value="single_value"/>
        <parameter key="invert_selection" value="false"/>
        <parameter key="include_special_attributes" value="false"/>
      </operator>
      <operator activated="false" class="sample" compatibility="9.8.000" expanded="true" height="82" name="Sample" width="90" x="313" y="34">
        <parameter key="sample" value="absolute"/>
        <parameter key="balance_data" value="false"/>
        <parameter key="sample_size" value="2000"/>
        <parameter key="sample_ratio" value="0.1"/>
        <parameter key="sample_probability" value="0.1"/>
        <list key="sample_size_per_class"/>
        <list key="sample_ratio_per_class"/>
        <list key="sample_probability_per_class"/>
        <parameter key="use_local_random_seed" value="false"/>
        <parameter key="local_random_seed" value="1992"/>
      </operator>
      <operator activated="true" class="text:process_document_from_data" compatibility="9.3.001" expanded="true" height="82" name="Process Documents from Data" width="90" x="447" y="34">
        <parameter key="create_word_vector" value="true"/>
        <parameter key="vector_creation" value="TF-IDF"/>
        <parameter key="add_meta_information" value="true"/>
        <parameter key="keep_text" value="true"/>
        <parameter key="prune_method" value="none"/>
        <parameter key="prune_below_percent" value="3.0"/>
        <parameter key="prune_above_percent" value="30.0"/>
        <parameter key="prune_below_absolute" value="2"/>
        <parameter key="prune_above_absolute" value="9999"/>
        <parameter key="prune_below_rank" value="0.05"/>
        <parameter key="prune_above_rank" value="0.95"/>
        <parameter key="datamanagement" value="double_sparse_array"/>
        <parameter key="data_management" value="auto"/>
        <parameter key="select_attributes_and_weights" value="false"/>
        <list key="specify_weights"/>
        <process expanded="true">
          <operator activated="true" class="text:transform_cases" compatibility="9.3.001" expanded="true" height="68" name="Transform Cases" width="90" x="112" y="85">
            <parameter key="transform_to" value="lower case"/>
          </operator>
          <operator activated="true" class="text:tokenize" compatibility="9.3.001" expanded="true" height="68" name="Tokenize" width="90" x="246" y="85">
            <parameter key="mode" value="non letters"/>
            <parameter key="characters" value=".:"/>
            <parameter key="language" value="English"/>
            <parameter key="max_token_length" value="3"/>
          </operator>
          <operator activated="true" class="text:filter_stopwords_german" compatibility="9.3.001" expanded="true" height="68" name="Filter Stopwords (German)" width="90" x="380" y="85">
            <parameter key="stop_word_list" value="Standard"/>
          </operator>
          <operator activated="true" class="text:stem_snowball" compatibility="9.3.001" expanded="true" height="68" name="Stem (Snowball)" width="90" x="514" y="85">
            <parameter key="language" value="German"/>
          </operator>
          <operator activated="true" class="text:filter_by_length" compatibility="9.3.001" expanded="true" height="68" name="Filter Tokens (by Length)" width="90" x="648" y="85">
            <parameter key="min_chars" value="4"/>
            <parameter key="max_chars" value="999"/>
          </operator>
          <operator activated="false" class="text:generate_n_grams_terms" compatibility="9.3.001" expanded="true" height="68" name="Generate n-Grams (Terms)" width="90" x="782" y="85">
            <parameter key="max_length" value="2"/>
          </operator>
          <operator activated="false" class="text:extract_token_number" compatibility="9.3.001" expanded="true" height="68" name="Extract Token Number" width="90" x="916" y="85">
            <parameter key="metadata_key" value="token_number"/>
            <parameter key="condition" value="all"/>
            <parameter key="case_sensitive" value="false"/>
            <parameter key="invert_condition" value="false"/>
          </operator>
          <connect from_port="document" to_op="Transform Cases" to_port="document"/>
          <connect from_op="Transform Cases" from_port="document" to_op="Tokenize" to_port="document"/>
          <connect from_op="Tokenize" from_port="document" to_op="Filter Stopwords (German)" to_port="document"/>
          <connect from_op="Filter Stopwords (German)" from_port="document" to_op="Stem (Snowball)" to_port="document"/>
          <connect from_op="Stem (Snowball)" from_port="document" to_op="Filter Tokens (by Length)" to_port="document"/>
          <connect from_op="Filter Tokens (by Length)" from_port="document" to_port="document 1"/>
          <portSpacing port="source_document" spacing="0"/>
          <portSpacing port="sink_document 1" spacing="0"/>
          <portSpacing port="sink_document 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="select_attributes" compatibility="9.8.000" expanded="true" height="82" name="Select Attributes" width="90" x="581" y="34">
        <parameter key="attribute_filter_type" value="no_missing_values"/>
        <parameter key="attribute" value=""/>
        <parameter key="attributes" value=""/>
        <parameter key="use_except_expression" value="false"/>
        <parameter key="value_type" value="attribute_value"/>
        <parameter key="use_value_type_exception" value="false"/>
        <parameter key="except_value_type" value="time"/>
        <parameter key="block_type" value="attribute_block"/>
        <parameter key="use_block_type_exception" value="false"/>
        <parameter key="except_block_type" value="value_matrix_row_start"/>
        <parameter key="invert_selection" value="false"/>
        <parameter key="include_special_attributes" value="false"/>
      </operator>
      <operator activated="false" class="filter_examples" compatibility="9.8.000" expanded="true" height="103" name="Filter Examples" width="90" x="715" y="34">
        <parameter key="parameter_expression" value=""/>
        <parameter key="condition_class" value="custom_filters"/>
        <parameter key="invert_filter" value="false"/>
        <list key="filters_list">
          <parameter key="filters_entry_key" value="token_number.ge.5"/>
        </list>
        <parameter key="filters_logic_and" value="true"/>
        <parameter key="filters_check_metadata" value="true"/>
      </operator>
      <operator activated="true" class="split_validation" compatibility="9.8.000" expanded="true" height="145" name="Validation" width="90" x="849" y="34">
        <parameter key="create_complete_model" value="false"/>
        <parameter key="split" value="relative"/>
        <parameter key="split_ratio" value="0.7"/>
        <parameter key="training_set_size" value="100"/>
        <parameter key="test_set_size" value="-1"/>
        <parameter key="sampling_type" value="automatic"/>
        <parameter key="use_local_random_seed" value="false"/>
        <parameter key="local_random_seed" value="1992"/>
        <process expanded="true">
          <operator activated="true" class="weka:W-NaiveBayesMultinomial" compatibility="7.3.000" expanded="true" height="82" name="W-NaiveBayesMultinomial" width="90" x="246" y="34">
            <parameter key="D" value="false"/>
          </operator>
          <connect from_port="training" to_op="W-NaiveBayesMultinomial" to_port="training set"/>
          <connect from_op="W-NaiveBayesMultinomial" from_port="model" to_port="model"/>
          <portSpacing port="source_training" spacing="0"/>
          <portSpacing port="sink_model" spacing="0"/>
          <portSpacing port="sink_through 1" spacing="0"/>
        </process>
        <process expanded="true">
          <operator activated="true" class="apply_model" compatibility="9.8.000" expanded="true" height="82" name="Apply Model" width="90" x="112" y="34">
            <list key="application_parameters"/>
            <parameter key="create_view" value="false"/>
          </operator>
          <operator activated="true" class="performance" compatibility="9.8.000" expanded="true" height="82" name="Performance" width="90" x="313" y="34">
            <parameter key="use_example_weights" value="true"/>
          </operator>
          <connect from_port="model" to_op="Apply Model" to_port="model"/>
          <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
          <connect from_op="Apply Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
          <connect from_op="Performance" from_port="performance" to_port="averagable 1"/>
          <portSpacing port="source_model" spacing="0"/>
          <portSpacing port="source_test set" spacing="0"/>
          <portSpacing port="source_through 1" spacing="0"/>
          <portSpacing port="sink_averagable 1" spacing="0"/>
          <portSpacing port="sink_averagable 2" spacing="0"/>
          <portSpacing port="sink_averagable 3" spacing="0"/>
        </process>
      </operator>
      <connect from_op="Read Excel" from_port="output" to_op="Nominal to Text" to_port="example set input"/>
      <connect from_op="Nominal to Text" from_port="example set output" to_op="Process Documents from Data" to_port="example set"/>
      <connect from_op="Process Documents from Data" from_port="example set" to_op="Select Attributes" to_port="example set input"/>
      <connect from_op="Select Attributes" from_port="example set output" to_op="Validation" to_port="training"/>
      <connect from_op="Validation" from_port="model" to_port="result 2"/>
      <connect from_op="Validation" from_port="training" to_port="result 1"/>
      <connect from_op="Validation" from_port="averagable 1" to_port="result 3"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
      <portSpacing port="sink_result 3" spacing="0"/>
      <portSpacing port="sink_result 4" spacing="0"/>
    </process>
  </operator>
</process>


Answers

  • Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
    You might consider using a different kernel in your SVM, or also try a Random Forest.  It's hard to know specifically why those algorithms are not working well without doing a deep dive into your data.  Also, what is the baseline (default model) accuracy?  It's hard to know whether 40% is a significant improvement or not without that piece of information.
    Preprocessing of text can also be critical.  Did you look at additional things like creating n-grams and then culling the resulting tokens based on the different frequency methods? (A sketch of that kind of search is included at the end of the thread.)
    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
  • 7JK7 Member Posts: 2 Learner I
    Thanks for your answer @Telcontar120,

    The baseline accuracy is around 27.5% when I just predict the majority class of my dataset. Today I finally found suitable settings for libSVM with a linear kernel and C=32, which gives me around 70% accuracy. I am still learning, but sometimes it just helps to think things through with another person, so I really appreciate it.





  • Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
    Of course!  You can also try using the Grid Optimization operator to do a thorough search over different kernel, C, and gamma options if you are using an SVM algorithm (see the sketch at the end of the thread).
    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
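Picking up the n-gram/pruning and grid-search suggestions from the replies above, a scikit-learn sketch of such a search could look like this. The parameter ranges and the file path are illustrative, not recommendations, and the vectorizer here uses the default tokenizer instead of the German stemming pipeline.

# Grid search over vectorizer options (n-grams, frequency pruning) and
# SVM options (kernel, C, gamma). Illustrative values only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

df = pd.read_excel("speeches.xlsx")                # hypothetical path

pipe = Pipeline([("tfidf", TfidfVectorizer(lowercase=True)),
                 ("svm", SVC())])

param_grid = [
    {"tfidf__ngram_range": [(1, 1), (1, 2)],       # unigrams vs. uni+bigrams
     "tfidf__min_df": [2, 5],                      # prune rare tokens
     "tfidf__max_df": [0.3, 0.5],                  # prune very frequent tokens
     "svm__kernel": ["linear"],
     "svm__C": [1, 8, 32, 128]},
    {"tfidf__ngram_range": [(1, 1)],
     "svm__kernel": ["rbf"],
     "svm__C": [1, 8, 32],
     "svm__gamma": ["scale", 0.01, 0.001]},
]

search = GridSearchCV(pipe, param_grid, cv=3, scoring="accuracy", n_jobs=-1)
search.fit(df["speech"], df["party"])
print(search.best_params_, search.best_score_)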