How to get the latest record for each id?

tayjy_wp16 Member Posts: 1 Newbie
For example, consider the following set of data rows:

ID          Attribute1          
1            25                    
1            16 
3            45      
2            34                  
1            35   
2            54   
3            33   

What operator should I use to get the latest value for each ID, as shown below?
 ID          Attribute1          
1            35      
2            34                    
3            33    


kayman Member Posts: 662 Unicorn
    edited June 2019
The Sort operator.
Sort on Attribute1 descending.

If you then need to keep only the latest value, you can use the Filter Example Range operator and set both first example and last example to 1.
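A rough plain-Python stand-in for this Sort + Filter Example Range idea (not RapidMiner code; the values are the Attribute1 rows for a single ID from the question):

```python
# Sketch only: mimics Sort (descending) followed by Filter Example Range
# with first example = last example = 1, for one ID's rows.
rows = [25, 16, 35]  # Attribute1 values for ID 1

sorted_rows = sorted(rows, reverse=True)  # Sort: Attribute1 descending
latest = sorted_rows[0:1]                 # Filter Example Range: first = last = 1
print(latest)  # [35]
```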
lionelderkrikor Moderator, RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
    Hi @tayjy_wp16,

You can use a Sort operator followed by a Loop Values operator combined with the Filter Examples and Filter Example Range operators.
Here is the process:

<?xml version="1.0" encoding="UTF-8"?><process version="9.3.000">
  <operator activated="true" class="process" compatibility="9.3.000" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="SYSTEM"/>
    <process expanded="true">
      <operator activated="true" class="utility:create_exampleset" compatibility="9.3.000" expanded="true" height="68" name="Create ExampleSet" width="90" x="179" y="85">
        <parameter key="generator_type" value="comma separated text"/>
        <parameter key="number_of_examples" value="100"/>
        <parameter key="use_stepsize" value="false"/>
        <list key="function_descriptions"/>
        <parameter key="add_id_attribute" value="false"/>
        <list key="numeric_series_configuration"/>
        <list key="date_series_configuration"/>
        <list key="date_series_configuration (interval)"/>
        <parameter key="date_format" value="yyyy-MM-dd HH:mm:ss"/>
        <parameter key="time_zone" value="SYSTEM"/>
        <parameter key="input_csv_text" value="Id, att_1&#10;1,25&#10;1,16&#10;3,45&#10;2,34&#10;1,35&#10;2,54&#10;3,33"/>
        <parameter key="column_separator" value=","/>
        <parameter key="parse_all_as_nominal" value="false"/>
        <parameter key="decimal_point_character" value="."/>
        <parameter key="trim_attribute_names" value="true"/>
      </operator>
      <operator activated="true" class="sort" compatibility="9.3.000" expanded="true" height="82" name="Sort" width="90" x="313" y="85">
        <parameter key="attribute_name" value="Id"/>
        <parameter key="sorting_direction" value="increasing"/>
      </operator>
      <operator activated="true" class="numerical_to_polynominal" compatibility="9.3.000" expanded="true" height="82" name="Numerical to Polynominal" width="90" x="447" y="85">
        <parameter key="attribute_filter_type" value="single"/>
        <parameter key="attribute" value="Id"/>
        <parameter key="attributes" value=""/>
        <parameter key="use_except_expression" value="false"/>
        <parameter key="value_type" value="numeric"/>
        <parameter key="use_value_type_exception" value="false"/>
        <parameter key="except_value_type" value="real"/>
        <parameter key="block_type" value="value_series"/>
        <parameter key="use_block_type_exception" value="false"/>
        <parameter key="except_block_type" value="value_series_end"/>
        <parameter key="invert_selection" value="false"/>
        <parameter key="include_special_attributes" value="false"/>
      </operator>
      <operator activated="true" class="concurrency:loop_values" compatibility="9.3.000" expanded="true" height="82" name="Loop Values" width="90" x="581" y="85">
        <parameter key="attribute" value="Id"/>
        <parameter key="iteration_macro" value="loop_value"/>
        <parameter key="reuse_results" value="false"/>
        <parameter key="enable_parallel_execution" value="true"/>
        <process expanded="true">
          <operator activated="true" class="filter_examples" compatibility="9.3.000" expanded="true" height="103" name="Filter Examples" width="90" x="313" y="34">
            <parameter key="parameter_expression" value=""/>
            <parameter key="condition_class" value="custom_filters"/>
            <parameter key="invert_filter" value="false"/>
            <list key="filters_list">
              <parameter key="filters_entry_key" value="Id.contains.%{loop_value}"/>
            </list>
            <parameter key="filters_logic_and" value="true"/>
            <parameter key="filters_check_metadata" value="true"/>
          </operator>
          <operator activated="true" class="extract_macro" compatibility="9.3.000" expanded="true" height="68" name="Extract Macro (2)" width="90" x="447" y="34">
            <parameter key="macro" value="numExamples"/>
            <parameter key="macro_type" value="number_of_examples"/>
            <parameter key="statistics" value="average"/>
            <parameter key="attribute_name" value=""/>
            <list key="additional_macros"/>
          </operator>
          <operator activated="true" class="filter_example_range" compatibility="9.3.000" expanded="true" height="82" name="Filter Example Range" width="90" x="648" y="34">
            <parameter key="first_example" value="%{numExamples}"/>
            <parameter key="last_example" value="%{numExamples}"/>
            <parameter key="invert_filter" value="false"/>
          </operator>
          <connect from_port="input 1" to_op="Filter Examples" to_port="example set input"/>
          <connect from_op="Filter Examples" from_port="example set output" to_op="Extract Macro (2)" to_port="example set"/>
          <connect from_op="Extract Macro (2)" from_port="example set" to_op="Filter Example Range" to_port="example set input"/>
          <connect from_op="Filter Example Range" from_port="example set output" to_port="output 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="source_input 2" spacing="0"/>
          <portSpacing port="sink_output 1" spacing="0"/>
          <portSpacing port="sink_output 2" spacing="0"/>
        </process>
      </operator>
      <operator activated="true" class="append" compatibility="9.3.000" expanded="true" height="82" name="Append" width="90" x="715" y="85">
        <parameter key="datamanagement" value="double_array"/>
        <parameter key="data_management" value="auto"/>
        <parameter key="merge_type" value="all"/>
      </operator>
      <connect from_op="Create ExampleSet" from_port="output" to_op="Sort" to_port="example set input"/>
      <connect from_op="Sort" from_port="example set output" to_op="Numerical to Polynominal" to_port="example set input"/>
      <connect from_op="Numerical to Polynominal" from_port="example set output" to_op="Loop Values" to_port="input 1"/>
      <connect from_op="Loop Values" from_port="output 1" to_op="Append" to_port="example set 1"/>
      <connect from_op="Append" from_port="merged set" to_port="result 1"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
    </process>
  </operator>
</process>
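For readers who don't use RapidMiner, a rough plain-Python equivalent of what this process does (Sort → Loop Values → Filter Examples → Extract Macro → Filter Example Range → Append); this is only a sketch, not RapidMiner code:

```python
# Sketch of the per-ID loop: for each Id value, keep that Id's rows,
# count them, and keep only the last one (row order = insertion order).
rows = [(1, 25), (1, 16), (3, 45), (2, 34), (1, 35), (2, 54), (3, 33)]

result = []
for loop_value in sorted({id_ for id_, _ in rows}):   # Loop Values over Id
    subset = [r for r in rows if r[0] == loop_value]  # Filter Examples on Id
    num_examples = len(subset)                        # Extract Macro: numExamples
    result.append(subset[num_examples - 1])           # Filter Example Range: first = last = numExamples
print(result)  # [(1, 35), (2, 54), (3, 33)]
```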
    Hope this helps,



Telcontar120 Moderator, RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
It also depends on what you mean by the "latest" value. If the records are continuously inserted into your data in some kind of chronological order, you can also use Generate ID (which simply generates a new id column numbered sequentially) and then use Aggregate to group by your original ID and take the max of the new ID. The result can then be joined back to your original dataset to retrieve only the records you want.
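A plain-Python sketch of this Generate ID → Aggregate → Join approach (stand-in code, not RapidMiner operators):

```python
# Sketch: number rows sequentially, take max(row_id) per ID, join back.
rows = [(1, 25), (1, 16), (3, 45), (2, 34), (1, 35), (2, 54), (3, 33)]

# Generate ID: number every row in insertion order
numbered = [(row_id, id_, att) for row_id, (id_, att) in enumerate(rows)]

# Aggregate: max(row_id) grouped by the original ID
latest_row_id = {}
for row_id, id_, _ in numbered:
    latest_row_id[id_] = max(latest_row_id.get(id_, -1), row_id)

# Join back: keep only the rows whose generated id is the group maximum
result = sorted((id_, att) for row_id, id_, att in numbered
                if latest_row_id[id_] == row_id)
print(result)  # [(1, 35), (2, 54), (3, 33)]
```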

    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts