I am using SparkRM to create a model for big data, but this error always appears.
Can you help me, please?
I assume that you are using a virtual machine? If so, try increasing the total amount of memory for the VM. You can also edit the parameter yarn.scheduler.maximum-allocation-mb in the "Advanced Connection Properties" of the configured Radoop connection.
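If you prefer to set it at the cluster level instead of in the Radoop connection, the same parameter lives in yarn-site.xml. A sketch (the 4096 MB value is only illustrative; tune it to your VM's memory):

```xml
<!-- yarn-site.xml: upper bound for a single YARN container allocation -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
```

On a Cloudera quickstart VM this is usually edited through Cloudera Manager rather than by hand.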
Saw a similar issue a while back, and by playing around with memory/resource parameters I got it working. Can't remember the details now...
I also assume it is a VM with small memory resources (quickstart VM?).
The simplest quickfix is probably to change the Radoop connection setting "Spark Resource Allocation Policy" from "Static, Heuristic Configuration" to "Static, Default Configuration".
This results in starting small containers (it is only suitable for single-node test clusters).
Thank you so much, your answer was really helpful,
but another problem appeared after resolving the first one.
Can you help me, please?
Do you have access to the Resource Manager interface? Typically, you can access it via a web browser at <hostname>:8088. You should see a job with the Name "Single Process Pushdown" there with the corresponding StartTime. If you click on its ID, it brings up a page, where there should be more details next to Diagnostics (somewhere in the middle).
Can you share it please?
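As an aside, the ResourceManager exposes the same application list over its REST API (http://&lt;hostname&gt;:8088/ws/v1/cluster/apps), which can be easier to query from a script than the web UI. A minimal Python sketch; the sample payload below is hand-made for illustration, but the field names (id, name, state, diagnostics) follow the YARN REST API:

```python
import json

# Hand-made sample of a /ws/v1/cluster/apps response (illustrative only;
# the application id and diagnostics text are invented for this sketch).
sample_response = json.loads("""
{
  "apps": {
    "app": [
      {
        "id": "application_1499279743370_0001",
        "name": "Single Process Pushdown",
        "state": "FAILED",
        "diagnostics": "Application exited with exitCode: 1"
      }
    ]
  }
}
""")

# Pick out the Radoop pushdown job and print its diagnostics.
for app in sample_response["apps"]["app"]:
    if app["name"] == "Single Process Pushdown":
        print(app["id"], app["state"])
        print("Diagnostics:", app["diagnostics"])
```

In a real setup you would fetch the JSON from the ResourceManager host instead of the hard-coded sample.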
thank you for replying
Looks like you will need to go further by clicking one of the logs links. The cause may be there, or following the here links may reveal more.
but I can't open the links you put in the message (the here links...)
I saw the log file in cloudera manager and I found this error:
[HiveServer2-Background-Pool: Thread-504]: Table radoop__tmp_lola_1499279743370_ruyrrcu not found: default.radoop__tmp_lola_1499279743370_ruyrrcu table not found
So the Cloudera community suggests that "the users using RM can't create tables under default. Or is creating them but under a different DB but RM Radoop is looking for them under default".
So can you tell me, please, how to solve this in RapidMiner?
these "table not found" error messages in the log do not mean anything, they are normal operation. If you run "drop table if exists tmp_xyz" in Hive, and the table does not exist, you get this log entry despite the "if exists" part, but no error appears on the client side.
With the logs link I meant this link below the Diagnostics: