text mining problem

Mohamad1367 Member Posts: 22 Contributor I
edited June 2020 in Help
Hi dear community,
I have a question about the Filter Stopwords (Dictionary) operator that I asked in a previous post, but no one answered it. When I apply this operator after tokenizing the example set, I don't receive any filtered output. What is the cause of this problem?
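
For reference, the expected behavior: a dictionary-based stopword filter simply drops every token that also appears in the dictionary file. Here is a minimal Python sketch of that idea, with made-up tokens and stopwords (an illustration only, not RapidMiner's implementation):

    # Illustration only, not RapidMiner internals: a stopword filter drops every
    # token that also appears in the stopword dictionary.
    stopwords = {"the", "was", "very"}                 # hypothetical dictionary entries
    tokens = ["the", "hotel", "was", "very", "clean"]  # hypothetical tokenizer output
    filtered = [t for t in tokens if t not in stopwords]
    print(filtered)  # ['hotel', 'clean'] -- the kind of filtered result the operator should produce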

Answers

  • Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
    Are you sure there are tokens in your dataset that are also in your stopwords dictionary file? Is your dictionary file correctly formatted? Are you receiving any error messages? Do you have an example process? The question is pretty vague right now, and without more information it will probably be difficult for anyone to provide more specific guidance. (A small sanity check along these lines is sketched below.)
    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
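    A rough sketch of that sanity check, assuming a plain-text dictionary with one stopword per line; the file name, encoding, and sample tokens below are placeholders, not anything from the original post:

        # Hypothetical check: does the stopword file actually overlap with the tokens?
        def check_dictionary(stopword_file, sample_tokens):
            with open(stopword_file, encoding="utf-8") as f:   # adjust if the file is not saved as UTF-8
                entries = [line.strip() for line in f]
            stopwords = {e.lower() for e in entries if e}      # skip blank lines, ignore case
            print("dictionary entries:", len(stopwords))
            removed = [t for t in sample_tokens if t.lower() in stopwords]
            print("tokens that would be removed:", removed or "none - check formatting, encoding, case")

        check_dictionary("stopwords.txt", ["این", "هتل", "خیلی", "تمیز", "بود"])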
  • Mohamad1367 Member Posts: 22 Contributor I
    edited June 2020
    Hi @Telcontar120, thank you for your response. I think everything looks good. I am sharing my process, stopword dictionary, and dataset here. Thank you very much for your help.

  • sara20 Member Posts: 110 Unicorn
    edited June 2020
    Hello, good day.
    Unfortunately, your process does not run on my system. What exactly is the problem at this point?
    Sara
  • Mohamad1367 Member Posts: 22 Contributor I
    @sara20 Hello, good day, and thank you for replying.
    I have a dataset of hotel reviews where each review has a class, and the classes range from 1 to 5,
    with class 1 meaning negative and class 5 meaning very positive.
    Now, when I go to filter the stopwords after tokenizing, I only see my tokenized dataset in the result; there is no trace of a dataset with its stopwords filtered out. I am posting my process code here for you, maybe it will run on your side, and I am also posting screenshots of my process, so if it doesn't run, please help me from the screenshots if possible. (A rough sketch of this filtering flow is added after the process XML below.)
     
    <?xml version="1.0" encoding="UTF-8"?><process version="9.7.000">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="9.7.000" expanded="true" name="Process">
        <parameter key="logverbosity" value="init"/>
        <parameter key="random_seed" value="2001"/>
        <parameter key="send_mail" value="never"/>
        <parameter key="notification_email" value=""/>
        <parameter key="process_duration_for_mail" value="30"/>
        <parameter key="encoding" value="SYSTEM"/>
        <process expanded="true">
          <operator activated="true" class="read_excel" compatibility="9.7.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="34">
            <parameter key="excel_file" value="C:\Users\Administrator\Desktop\test.xlsx"/>
            <parameter key="sheet_selection" value="sheet number"/>
            <parameter key="sheet_number" value="1"/>
            <parameter key="imported_cell_range" value="A1"/>
            <parameter key="encoding" value="SYSTEM"/>
            <parameter key="first_row_as_names" value="true"/>
            <list key="annotations"/>
            <parameter key="date_format" value=""/>
            <parameter key="time_zone" value="SYSTEM"/>
            <parameter key="locale" value="English (United States)"/>
            <parameter key="read_all_values_as_polynominal" value="false"/>
            <list key="data_set_meta_data_information">
              <parameter key="0" value="opinion.true.polynominal.attribute"/>
              <parameter key="1" value="label.true.integer.attribute"/>
            </list>
            <parameter key="read_not_matching_values_as_missings" value="false"/>
            <parameter key="datamanagement" value="double_array"/>
            <parameter key="data_management" value="auto"/>
          </operator>
          <operator activated="true" class="nominal_to_text" compatibility="9.7.000" expanded="true" height="82" name="Nominal to Text" width="90" x="179" y="34">
            <parameter key="attribute_filter_type" value="single"/>
            <parameter key="attribute" value="opinion"/>
            <parameter key="attributes" value=""/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="nominal"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="file_path"/>
            <parameter key="block_type" value="single_value"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="single_value"/>
            <parameter key="invert_selection" value="false"/>
            <parameter key="include_special_attributes" value="false"/>
          </operator>
          <operator activated="true" breakpoints="after" class="rosette_text_toolkit:rosette_tokenize" compatibility="1.11.000" expanded="true" height="68" name="Tokenize (2)" width="90" x="313" y="34">
            <parameter key="Connection" value="NewConnection"/>
            <parameter key="Source Language" value="Persian"/>
            <parameter key="Attribute Selector" value="opinion"/>
            <parameter key="Filter stopwords" value="false"/>
          </operator>
          <operator activated="true" class="text:process_document_from_data" compatibility="9.3.001" expanded="true" height="82" name="Process Documents from Data" width="90" x="447" y="34">
            <parameter key="create_word_vector" value="true"/>
            <parameter key="vector_creation" value="TF-IDF"/>
            <parameter key="add_meta_information" value="true"/>
            <parameter key="keep_text" value="false"/>
            <parameter key="prune_method" value="none"/>
            <parameter key="prune_below_percent" value="3.0"/>
            <parameter key="prune_above_percent" value="30.0"/>
            <parameter key="prune_below_rank" value="0.05"/>
            <parameter key="prune_above_rank" value="0.95"/>
            <parameter key="datamanagement" value="double_sparse_array"/>
            <parameter key="data_management" value="auto"/>
            <parameter key="select_attributes_and_weights" value="true"/>
            <list key="specify_weights">
              <parameter key="Token" value="1.0"/>
            </list>
            <process expanded="true">
              <operator activated="true" class="open_file" compatibility="9.7.000" expanded="true" height="68" name="Open File" width="90" x="112" y="187">
                <parameter key="resource_type" value="file"/>
                <parameter key="filename" value="C:/Users/Administrator/Desktop/stopwords.txt"/>
              </operator>
              <operator activated="true" class="text:filter_stopwords_dictionary" compatibility="9.3.001" expanded="true" height="82" name="Filter Stopwords (Dictionary)" width="90" x="246" y="34">
                <parameter key="file" value="C:\Users\lione\Downloads\stopwords.txt"/>
                <parameter key="case_sensitive" value="false"/>
                <parameter key="encoding" value="SYSTEM"/>
              </operator>
              <connect from_port="document" to_op="Filter Stopwords (Dictionary)" to_port="document"/>
              <connect from_op="Open File" from_port="file" to_op="Filter Stopwords (Dictionary)" to_port="file"/>
              <connect from_op="Filter Stopwords (Dictionary)" from_port="document" to_port="document 1"/>
              <portSpacing port="source_document" spacing="0"/>
              <portSpacing port="sink_document 1" spacing="0"/>
              <portSpacing port="sink_document 2" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Read Excel" from_port="output" to_op="Nominal to Text" to_port="example set input"/>
          <connect from_op="Nominal to Text" from_port="example set output" to_op="Tokenize (2)" to_port="example set"/>
          <connect from_op="Tokenize (2)" from_port="example set" to_op="Process Documents from Data" to_port="example set"/>
          <connect from_op="Process Documents from Data" from_port="example set" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>
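    As a rough illustration of what the subprocess above is wired to do (Open File supplies the dictionary and Filter Stopwords (Dictionary) filters each tokenized document before the word vector is built), here is a minimal Python sketch. It is not RapidMiner code: the path and documents are placeholders, it mimics case_sensitive = false from the operator above, and it reads the dictionary as UTF-8, whereas the process above uses the SYSTEM encoding, which has to match how the Persian word list was actually saved.

        # Sketch of the per-document flow inside Process Documents (illustration only):
        # Open File -> Filter Stopwords (Dictionary) -> filtered document for the word vector.
        def load_stopwords(path, encoding="utf-8"):        # placeholder encoding; must match the file
            with open(path, encoding=encoding) as f:
                return {line.strip().lower() for line in f if line.strip()}

        def filter_documents(tokenized_docs, stopword_path):
            stopwords = load_stopwords(stopword_path)
            # case_sensitive = false above, so compare lowercased tokens
            return [[t for t in doc if t.lower() not in stopwords] for doc in tokenized_docs]

        # hypothetical tokenized documents; in the process they come from Tokenize (2)
        docs = [["هتل", "خیلی", "تمیز", "بود"]]
        print(filter_documents(docs, "stopwords.txt"))     # placeholder path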


  • sara20 Member Posts: 110 Unicorn
    It still didn't work; it just hangs. The Tokenize part has a problem: it keeps either throwing an error or freezing. But I'll keep searching to see what I can find and will send you the link.

    Sorry.
  • Mohamad1367 Member Posts: 22 Contributor I
    @sara20 Oh, so your problem is with the tokenization part? For tokenizing Persian I installed an extension called the Rosette extension. Could you install it and see what is wrong with my process? I have gone through almost this entire forum. :/
  • sara20 Member Posts: 110 Unicorn
    I installed that too, but unfortunately it still doesn't work :(
    It just freezes.
    (three screenshots attached)
  • Mohamad1367 Member Posts: 22 Contributor I
    edited June 2020
    @sara20 In the parameters of this operator there is a place where you have to get an API key and set it in the Connection section. Did you get that?

    Click on that red logo; it has an Add Connection section that gives you an API key.
