
Optimizing Parameters with RM 5

Garett Member Posts: 10 Contributor II
edited November 2018 in Help
Hi,
Before RapidMiner 5, the optimized parameters were changed directly in the learner. Now it seems we have to use a Set Parameters operator, but that doesn't work in the process below. Will I need two learners, or am I naming the operators wrong in Set Parameters? Thanks in advance!
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.0">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" expanded="true" name="Root">
    <description>This process tries to find the best selection threshold for the weights provided by a SVM learner. The weights and the example set are given to a parameter optimization. The parameter &amp;quot;weight&amp;quot; of the Selection operator is optimized with a grid search. The performance of this threshold is evaluated with the cross validation building block. Please refer to the meta sample processes for further details regarding the parameter optimization operators.</description>
    <process expanded="true" height="604" width="846">
      <operator activated="true" class="retrieve" expanded="true" height="60" name="Retrieve" width="90" x="45" y="30">
        <parameter key="repository_entry" value="../../data/Weighting"/>
      </operator>
      <operator activated="true" class="support_vector_machine" expanded="true" height="112" name="InitialWeights" width="90" x="179" y="30">
        <parameter key="scale" value="false"/>
      </operator>
      <operator activated="true" class="optimize_parameters_grid" expanded="true" height="112" name="GridParameterOptimization" width="90" x="380" y="30">
        <list key="parameters">
          <parameter key="Selection.weight" value="0.5,0.25,0.2,0.0"/>
          <parameter key="JMySVMLearner.C" value="[1.0;100.0;10;linear]"/>
        </list>
        <process expanded="true" height="604" width="846">
          <operator activated="true" class="select_by_weights" expanded="true" height="94" name="Selection" width="90" x="45" y="30">
            <parameter key="weight" value="0.0"/>
          </operator>
          <operator activated="true" class="x_validation" expanded="true" height="130" name="XValidation" width="90" x="179" y="30">
            <process expanded="true">
              <operator activated="true" class="support_vector_machine" expanded="true" name="JMySVMLearner">
                <parameter key="C" value="90.10000000000001"/>
              </operator>
              <connect from_port="training" to_op="JMySVMLearner" to_port="training set"/>
              <connect from_op="JMySVMLearner" from_port="model" to_port="model"/>
              <connect from_op="JMySVMLearner" from_port="weights" to_port="through 1"/>
              <portSpacing port="source_training" spacing="0"/>
              <portSpacing port="sink_model" spacing="0"/>
              <portSpacing port="sink_through 1" spacing="0"/>
              <portSpacing port="sink_through 2" spacing="0"/>
            </process>
            <process expanded="true">
              <operator activated="true" class="apply_model" expanded="true" name="ModelApplier">
                <list key="application_parameters"/>
              </operator>
              <operator activated="true" class="performance_classification" expanded="true" name="ClassificationPerformance">
                <parameter key="classification_error" value="true"/>
                <list key="class_weights"/>
              </operator>
              <connect from_port="model" to_op="ModelApplier" to_port="model"/>
              <connect from_port="test set" to_op="ModelApplier" to_port="unlabelled data"/>
              <connect from_port="through 1" to_port="averagable 2"/>
              <connect from_op="ModelApplier" from_port="labelled data" to_op="ClassificationPerformance" to_port="labelled data"/>
              <connect from_op="ClassificationPerformance" from_port="performance" to_port="averagable 1"/>
              <portSpacing port="source_model" spacing="0"/>
              <portSpacing port="source_test set" spacing="0"/>
              <portSpacing port="source_through 1" spacing="0"/>
              <portSpacing port="source_through 2" spacing="0"/>
              <portSpacing port="sink_averagable 1" spacing="0"/>
              <portSpacing port="sink_averagable 2" spacing="0"/>
              <portSpacing port="sink_averagable 3" spacing="0"/>
            </process>
          </operator>
          <connect from_port="input 1" to_op="Selection" to_port="example set input"/>
          <connect from_port="input 2" to_op="Selection" to_port="weights"/>
          <connect from_op="Selection" from_port="example set output" to_op="XValidation" to_port="training"/>
          <connect from_op="XValidation" from_port="averagable 1" to_port="performance"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="source_input 2" spacing="0"/>
          <portSpacing port="source_input 3" spacing="0"/>
          <portSpacing port="sink_performance" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="set_parameters" expanded="true" height="60" name="Set Parameters" width="90" x="581" y="120">
        <list key="name_map">
          <parameter key="JMySVMLearner.C" value="JMySVMLearner"/>
        </list>
      </operator>
      <connect from_op="Retrieve" from_port="output" to_op="InitialWeights" to_port="training set"/>
      <connect from_op="InitialWeights" from_port="weights" to_op="GridParameterOptimization" to_port="input 2"/>
      <connect from_op="InitialWeights" from_port="exampleSet" to_op="GridParameterOptimization" to_port="input 1"/>
      <connect from_op="GridParameterOptimization" from_port="performance" to_port="result 1"/>
      <connect from_op="GridParameterOptimization" from_port="parameter" to_op="Set Parameters" to_port="parameter set"/>
      <connect from_op="GridParameterOptimization" from_port="result 1" to_port="result 3"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 3" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process>

Answers

  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    the parameters are adapted for the specified operators inside the parameter optimization; otherwise the optimization would not be possible. If you want to use the optimized settings later on, you can use the Set Parameters operator to recreate the optimal parameter combination on any other operator that has the same parameters.

    Greetings,
      Sebastian
  • Garett Member Posts: 10 Contributor II
    Thanks for your reply. Yes, I'm aware I need to use the Set Parameters operator, but how do I use it? The documentation says two learners are needed, but how are the ports linked? Or would we have two example sets and two learners that are unconnected?

    The tutorial was very helpful, but I didn't see a specific example of this. Does anyone have one? Thanks guys.
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    the ParameterSet only contains parameters, no information about connections; it simply cannot know which example set you are going to learn the model on. So on the left you specify the name of the source operator, i.e. one of the operators whose parameters were optimized during the parameter optimization. The right field takes the name of an operator inside the process, which then receives all compatible parameter values from the parameter set (see the sketch below the answers).
    I just noticed that the tooltips in the list dialog aren't very helpful. I have updated them so that they will be correct in the final version.

    Greetings,
      Sebastian
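
For illustration, here is a minimal sketch of the mapping Sebastian describes, assuming a second, hypothetical learner named "FinalSVM" has been added to the root process outside the optimization. The left side of the name map names the operator inside the optimization whose parameters were tuned (JMySVMLearner); the right side names the operator that should receive those values. The example-set connection to FinalSVM is not shown and would still need to be wired up (for example by duplicating the example set with a Multiply operator before the optimization).

<!-- sketch only, not a complete process: "FinalSVM" is a hypothetical second learner -->
<operator activated="true" class="support_vector_machine" expanded="true" height="112" name="FinalSVM" width="90" x="715" y="120"/>
<operator activated="true" class="set_parameters" expanded="true" height="60" name="Set Parameters" width="90" x="581" y="120">
  <list key="name_map">
    <!-- left: operator inside the optimization; right: operator that receives the optimal values -->
    <parameter key="JMySVMLearner" value="FinalSVM"/>
  </list>
</operator>
<!-- pass the optimal parameter set into Set Parameters, as in the process above -->
<connect from_op="GridParameterOptimization" from_port="parameter" to_op="Set Parameters" to_port="parameter set"/>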