Data to Documents Operator in Rapidminer 5

munro Member Posts: 2 Contributor I
edited November 2018 in Help

I am trying to convert the rows of a single column Excel file into separate documents for text mining. I can successfully use the Data to Documents Operator to convert the rows into an IO Object Collection, and can see that each document in the collection corresponds to each row in the spreadsheet, but cannot figure out how to process this collection further to create word vectors for subsequent model building.

I have looked both for Operators that write the documents to a folder (so that I can then use the Process Documents from Files Operator) and for Operators that process the IO Object Collection directly, without writing the documents out to files.

Can anyone help me out?



    SebastianLoh Member Posts: 99 Contributor II
    Hi munro,

    Maybe the Process Documents operator is the answer to your question. It accepts a collection of documents, creates the word vectors, and processes them internally. Otherwise, please post your process.

    Ciao Sebastian
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <process version="5.0">
      <operator activated="true" class="process" expanded="true" name="Process">
        <process expanded="true" height="476" width="681">
          <operator activated="true" class="retrieve" expanded="true" height="60" name="Retrieve" width="90" x="45" y="75">
            <parameter key="repository_entry" value="//Samples/data/Golf"/>
          </operator>
          <operator activated="true" class="text:data_to_documents" expanded="true" height="60" name="Data to Documents" width="90" x="179" y="75">
            <list key="specify_weights"/>
          </operator>
          <operator activated="true" class="text:process_documents" expanded="true" height="94" name="Process Documents" width="90" x="514" y="120">
            <process expanded="true" height="583" width="773">
              <operator activated="true" class="text:tokenize" expanded="true" height="60" name="Tokenize" width="90" x="246" y="30"/>
              <connect from_port="document" to_op="Tokenize" to_port="document"/>
              <connect from_op="Tokenize" from_port="document" to_port="document 1"/>
              <portSpacing port="source_document" spacing="0"/>
              <portSpacing port="sink_document 1" spacing="0"/>
              <portSpacing port="sink_document 2" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Retrieve" from_port="output" to_op="Data to Documents" to_port="example set"/>
          <connect from_op="Data to Documents" from_port="documents" to_op="Process Documents" to_port="documents 1"/>
          <connect from_op="Process Documents" from_port="example set" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="90"/>
          <portSpacing port="sink_result 2" spacing="54"/>
        </process>
      </operator>
    </process>
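    For intuition, here is a minimal Python sketch of what the Process Documents operator (with a Tokenize sub-operator) produces from a collection of documents: one shared vocabulary and one word vector per document. The sample rows and the whitespace tokenization are illustrative assumptions, not RapidMiner internals, and raw term counts stand in for whatever vector creation scheme (term frequency, TF-IDF, etc.) you select in the operator.

    ```python
    from collections import Counter

    # Rows of a single-column data set, each becoming one "document"
    # (mirrors what Data to Documents does to an example set).
    # Sample values are made up for illustration.
    rows = [
        "sunny hot high weak",
        "sunny hot high strong",
        "overcast hot high weak",
    ]

    # Tokenize each document; plain whitespace splitting stands in
    # for the Tokenize operator here.
    tokens_per_doc = [row.split() for row in rows]

    # The shared vocabulary becomes the attribute set of the word vectors.
    vocabulary = sorted({tok for toks in tokens_per_doc for tok in toks})

    def word_vector(tokens):
        """Raw term-count vector over the shared vocabulary."""
        counts = Counter(tokens)
        return [counts[term] for term in vocabulary]

    # Analogous to the example set Process Documents returns:
    # one row per document, one column per vocabulary term.
    vectors = [word_vector(toks) for toks in tokens_per_doc]
    ```

    The resulting example set (rows = documents, columns = terms) is what you then feed into a learner for model building.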