"Problem crawling pages with blank spaces"
I have been using RapidMiner for a while and have some experience with web crawling without major problems. But one new assignment has me puzzled.
The URLs look like this:
http:\\www.movilauto.com\toyota rav4 2012.html
http:\\www.movilauto.com\bmw 320 2013.html
I would normally use .+movilauto.+ to match these pages, and it works pretty well. But apparently spaces are a problem.
To complicate things further, the number of spaces is not fixed: sometimes there are two, as in the previous examples, and sometimes there are three, as in the following example:
http:\\www.movilauto.com\toyota rav4 automatic 2012.html
Any suggestions?
Answers
Hi!
Use the Encode URLs operator (in the Web Mining extension) to correctly pass the URLs.
Note: your use of backslashes (\) instead of forward slashes (/) will also break everything, so you should replace those as well.
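To make clear what Encode URLs does to these links, here is a sketch of the equivalent transformation in Python (the operator itself is configured inside RapidMiner; this is just an illustration, using the corrected forward slashes):

```python
from urllib.parse import quote

# Spaces (and other unsafe characters) must be percent-encoded
# before a URL can be fetched; this mirrors what Encode URLs does.
url = "http://www.movilauto.com/toyota rav4 2012.html"
encoded = quote(url, safe=":/")  # keep the scheme and path separators as-is
print(encoded)
# http://www.movilauto.com/toyota%20rav4%202012.html
```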
Regards,
Balázs
Thank you Balázs, for your answer.
My mistake with the backslashes: I checked the RapidMiner operator and I was using the correct slashes; it was a typo while writing the post.
I found the Encode URLs operator, but I am unsure how to use it; my process is extremely simple.
The site has few pages, and the crawling operator finds the pages but doesn't store them.
I attached the log file.
Very grateful for your help!
OK, this seems to be a limitation in the web crawler.
Your best bet is to parse the links yourself.
You get a list of pages from the crawler; these are the main pages. You can process them with Process Documents from Files (Text Processing extension) and Extract Information to pull out the link URLs containing spaces. Then use Encode URLs to produce correct URLs that you can access in the next step.
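The extract-then-encode steps above can be sketched in Python; the HTML snippet and the regex are just illustrative stand-ins for what Extract Information would be configured to do:

```python
import re
from urllib.parse import quote

# Hypothetical page content standing in for what the crawler returns
html = '<a href="http://www.movilauto.com/toyota rav4 automatic 2012.html">Toyota</a>'

# Extract the raw link targets (the Extract Information step)
raw_links = re.findall(r'href="([^"]+)"', html)

# Percent-encode the spaces (the Encode URLs step) so the pages can be fetched
links = [quote(u, safe=":/") for u in raw_links]
print(links)
# ['http://www.movilauto.com/toyota%20rav4%20automatic%202012.html']
```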
Regards,
Balázs