Import list of URLs for "Process Documents from Web"
I would like to use a CSV of URLs as the starting point for "Process Documents from Web". Instead of defining a single starting point where RapidMiner begins crawling, I would like to use the URLs from the CSV. However, the operator has no input port.
Hope this is not a stupid question, as I am an absolute RapidMiner beginner. Regards, Roman
Best Answer
sgenzer (Community Manager) Posts: 2,959
hi Roman -
So are you sure you want to use "Process Documents from Web"? This operator is rather specific: it is used exclusively to process PDFs or text files that the crawler finds.
In either case, sure, you can use your CSV of URLs, no problem. Something like this should get you going:
<?xml version="1.0" encoding="UTF-8"?>
<process version="9.0.003">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="9.0.003" expanded="true" name="Process">
    <process expanded="true">
      <operator activated="true" class="read_csv" compatibility="9.0.003" expanded="true" height="68" name="Read CSV" width="90" x="45" y="34">
        <list key="annotations"/>
        <list key="data_set_meta_data_information"/>
      </operator>
      <operator activated="true" class="concurrency:loop_values" compatibility="9.0.003" expanded="true" height="82" name="Loop Values" width="90" x="179" y="34">
        <parameter key="attribute" value="URL"/>
        <parameter key="iteration_macro" value="URL"/>
        <parameter key="enable_parallel_execution" value="false"/>
        <process expanded="true">
          <operator activated="true" class="web:process_web_modern" compatibility="7.3.000" expanded="true" height="68" name="Process Documents from Web" width="90" x="179" y="34">
            <parameter key="url" value="%{URL}"/>
            <list key="crawling_rules"/>
            <process expanded="true">
              <portSpacing port="source_document" spacing="0"/>
              <portSpacing port="sink_document 1" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Process Documents from Web" from_port="example set" to_port="output 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="source_input 2" spacing="0"/>
          <portSpacing port="sink_output 1" spacing="0"/>
          <portSpacing port="sink_output 2" spacing="0"/>
        </process>
      </operator>
      <connect from_op="Read CSV" from_port="output" to_op="Loop Values" to_port="input 1"/>
      <connect from_op="Loop Values" from_port="output 1" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process>

Scott
Answers
Awesome help! Thank you, Roman
The Get Pages operator also does exactly what you are asking: it retrieves a set of URLs from a list (its input is an ExampleSet, but if you already have the list in a CSV, you can easily turn that into an ExampleSet with Read CSV first). You may find that Process Documents from Web has some "quirks" that make it better to retrieve the pages separately first and then process them with one of the other Process Documents operators.
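Outside RapidMiner, the same pattern (read a URL column from a CSV, then loop over the values and fetch each page) can be sketched in plain Python. This is only an illustrative sketch of the technique, not RapidMiner code; `read_url_list` and `fetch_pages` are hypothetical names, and the column header `URL` is assumed to match the CSV used in the process above.

```python
import csv
import io
import urllib.request

def read_url_list(csv_text, column="URL"):
    """Parse CSV text with a header row and return the values of the URL column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row[column] for row in reader]

def fetch_pages(urls, timeout=10):
    """Retrieve each URL; record failures instead of aborting the whole loop."""
    pages = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                pages[url] = resp.read().decode("utf-8", errors="replace")
        except OSError:
            pages[url] = None  # page could not be loaded; keep going
    return pages

# Example: a CSV with one URL per row under a "URL" header.
sample = "URL\nhttps://example.com/a\nhttps://example.com/b\n"
urls = read_url_list(sample)
print(urls)
```

The per-URL `try`/`except` mirrors why fetching pages separately can be more robust than crawling inside one operator: a single dead link does not stop the rest of the list from being processed.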