What is the logic behind the impact of weights on the prediction?
I am working on a project in which I need to predict the value of one variable based on the other variables in a database, which I am using as the input to the process. The tool I am using in RapidMiner is Auto Model. Everything went well when running it: the algorithm that came out as the best was Gradient Boosted Trees, so I focused on that one.

In the "Weights" tab, certain variables (let's call them "a", "b" and "c") came out as the most influential, i.e. of greatest importance. I then went to the "Simulator" tab to see how these variables affect the value of my target variable (call it "y"). However, when I change them, the predicted value stays the same. I also tried modifying the values of the less influential variables to see whether any of them had an impact on "y". During this test I came across two variables ("m" and "n") that did change the value of "y", which seemed strange to me, because neither of them was listed as being anywhere near as influential as "a", "b" or "c".

Another thing I observed and found curious is that, in the "Production Model" tab, most of the trees have these same two variables "m" and "n" at their roots, but I don't know what I can conclude from that.

Could someone explain to me why this happens, i.e. what the real logic is regarding the impact of the weights on the prediction, and why certain variables that are hardly influential at all do cause an impact? I hope you can help me. Thanks in advance.
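To make the question concrete outside of RapidMiner, here is a small toy reproduction of what I mean, sketched in Python with scikit-learn's gradient boosting (which I assume behaves like the GBT in Auto Model in this respect). The feature names and data are made up for illustration only: "x0" plays the role of my influential-but-inert variable and "x1" plays the role of "m"/"n". The global feature importance of x0 is high, yet nudging x0 around the simulator's default point does not move the prediction, while small changes in x1 do:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 2))
# x0 only matters through one big step at 0.8; x1 has a smooth linear effect.
# The step is large, so x0 tends to dominate the global importance ranking.
y = 10.0 * (X[:, 0] > 0.8) + 3.0 * X[:, 1]

model = GradientBoostingRegressor(random_state=0).fit(X, y)
print("importances (x0, x1):", model.feature_importances_)

base = np.array([[0.4, 0.5]])  # hypothetical "default" point of a simulator
print("base prediction:", model.predict(base)[0])

# Nudging x0 within [0.2, 0.6] crosses no learned split on x0,
# so the prediction barely moves despite x0's high importance.
for delta in (-0.2, 0.2):
    probe = base.copy()
    probe[0, 0] += delta
    print("x0 =", probe[0, 0], "->", model.predict(probe)[0])

# A small nudge in the "less important" x1 shifts the prediction noticeably.
probe = base.copy()
probe[0, 1] += 0.2
print("x1 =", probe[0, 1], "->", model.predict(probe)[0])
```

In this toy, the importance score is a global, training-set-wide summary, while the simulator probes the model locally at one point, which seems to be the kind of mismatch I am seeing. Is that the correct way to think about it?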