Import list of URLs for "Process Documents from Web"

S1001108 Member Posts: 2 Contributor I
edited May 2019 in Help

I would like to use a CSV of URLs as the starting point for "Process Documents from Web". So instead of defining a single starting point where RapidMiner begins crawling, I would like to use the URLs from the CSV. However, the operator has no input port.

Hope this is not a stupid question, as I am an absolute RapidMiner beginner. Regards, Roman

Best Answer

    sgenzer Administrator, Moderator, Employee, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager
    Solution Accepted

    hi Roman -


    So are you sure you want to use "Process Documents from Web"? This operator is a rather specific one: it is used exclusively to process the PDFs or text files that the crawler finds.


    In either case, sure, you can use your CSV of URLs no problem. Something like this should get you going:


    <?xml version="1.0" encoding="UTF-8"?><process version="9.0.003">
      <operator activated="true" class="process" compatibility="9.0.003" expanded="true" name="Process">
        <process expanded="true">
          <operator activated="true" class="read_csv" compatibility="9.0.003" expanded="true" height="68" name="Read CSV" width="90" x="45" y="34">
            <list key="annotations"/>
            <list key="data_set_meta_data_information"/>
          </operator>
          <operator activated="true" class="concurrency:loop_values" compatibility="9.0.003" expanded="true" height="82" name="Loop Values" width="90" x="179" y="34">
            <parameter key="attribute" value="URL"/>
            <parameter key="iteration_macro" value="URL"/>
            <parameter key="enable_parallel_execution" value="false"/>
            <process expanded="true">
              <operator activated="true" class="web:process_web_modern" compatibility="7.3.000" expanded="true" height="68" name="Process Documents from Web" width="90" x="179" y="34">
                <parameter key="url" value="%{URL}"/>
                <list key="crawling_rules"/>
                <process expanded="true">
                  <portSpacing port="source_document" spacing="0"/>
                  <portSpacing port="sink_document 1" spacing="0"/>
                </process>
              </operator>
              <connect from_op="Process Documents from Web" from_port="example set" to_port="output 1"/>
              <portSpacing port="source_input 1" spacing="0"/>
              <portSpacing port="source_input 2" spacing="0"/>
              <portSpacing port="sink_output 1" spacing="0"/>
              <portSpacing port="sink_output 2" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Read CSV" from_port="output" to_op="Loop Values" to_port="input 1"/>
          <connect from_op="Loop Values" from_port="output 1" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>



    S1001108 Member Posts: 2 Contributor I

    Awesome help! Thank you, Roman

    Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn

    The Get Pages operator also does exactly what you are asking: it retrieves a set of pages from a list of URLs (its input is an ExampleSet, but if you already have the list in a CSV you can easily turn that into an ExampleSet with Read CSV first). You may find that Process Documents from Web has some "quirks" that make it better to get the pages separately first and then process them using one of the other Process Documents operators.
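
    A minimal sketch of that Get Pages route might look like the process below. Note this is an untested sketch based on what I recall of the Web Mining extension: the operator class (`web:retrieve_webpages`), the `link_attribute` parameter, and the port names are assumptions, so verify them against your installed extension version, and configure the CSV path in Read CSV yourself:

    ```
    <?xml version="1.0" encoding="UTF-8"?><process version="9.0.003">
      <operator activated="true" class="process" compatibility="9.0.003" expanded="true" name="Process">
        <process expanded="true">
          <!-- Read the CSV containing a column of URLs (set your file path here) -->
          <operator activated="true" class="read_csv" compatibility="9.0.003" expanded="true" height="68" name="Read CSV" width="90" x="45" y="34">
            <list key="annotations"/>
            <list key="data_set_meta_data_information"/>
          </operator>
          <!-- Get Pages: fetches the page behind each URL in the given attribute -->
          <operator activated="true" class="web:retrieve_webpages" compatibility="7.3.000" expanded="true" height="68" name="Get Pages" width="90" x="179" y="34">
            <parameter key="link_attribute" value="URL"/>
          </operator>
          <connect from_op="Read CSV" from_port="output" to_op="Get Pages" to_port="Example Set"/>
          <connect from_op="Get Pages" from_port="Example Set" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
        </process>
      </operator>
    </process>
    ```

    From there you can hand the retrieved pages to Process Documents from Data (or another Process Documents operator) for tokenization and so on.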

    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts