Any examples of a deep learning time series binary classification?

Repletion Member Posts: 24 Contributor I
Hello!
I'm looking for an example of a deep learning time series binary classification. It would greatly help my understanding of the architecture behind these models, so if anybody has an example or workflow they are willing to share, it would be much appreciated!

Answers

  • lionelderkrikor Moderator, RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
    Hi @Repletion,

    In the RapidMiner repository, you can go to:
    Samples -> Deep Learning -> 02 sequential data -> 02 ICU mortality classification

    Is this what you are looking for?

    Regards,

    Lionel

  • Repletion Member Posts: 24 Contributor I
    @lionelderkrikor To some extent, yes, this is what I'm looking for. However, that dataset doesn't have an ID attribute, and when I try to replicate it with a dataset that contains an integer (label), a date (id), and various attributes, it simply gives me the following log output: "Couldn't update network in epoch n" (n = 1, 2, 3, ...).

    Also, isn't the number of neurons in the network supposed to be equal to the number of attributes? Or how exactly does that work (because in my head, every neuron holds the weights and biases for one attribute)?
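
    (For reference, in frameworks such as Keras the number of LSTM neurons is a free hyperparameter, independent of the number of input attributes; only the final layer has to match the number of label classes. A minimal sketch with assumed sizes, not the RapidMiner operator's internals:)

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    n_timesteps = 30   # window length (time steps fed to the network)
    n_features = 10    # number of input attributes per time step (assumed)
    model = Sequential([
        LSTM(300, input_shape=(n_timesteps, n_features)),  # 300 hidden units, unrelated to n_features
        Dense(2, activation="softmax"),                     # 2 output neurons = 2 classes (e.g. Bull/Bear)
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    model.summary()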

  • lionelderkrikor Moderator, RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
    @Repletion,

    "to some extent yes this is what im looking for..."
    Could you share your data and explain exactly what you want to perform (or predict). So it will be easier to help you...

    Regards,  
    Lionel
  • Repletion Member Posts: 24 Contributor I
    @lionelderkrikor I'm trying to predict whether a closing price is higher than the previous day's closing price (1) or not (0). Building a binary model for this should be pretty straightforward, but RapidMiner is different from the statistics software I have experience with, and I guess it shows.

    Ignore DIA Basics. It's the wrong CSV; DIA filtered is the one I'm working with.
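
    (The Bull/Bear label in the process below boils down to the following pandas sketch; the file name and column names are assumptions taken from the workflow, not a tested script:)

    import pandas as pd

    # Mirrors the Generate Attributes expression if(close >= [close-1], 1, 0)
    df = pd.read_csv("DIA_filtered.csv", parse_dates=["timestamp"])   # assumed file/column names
    prev_close = df["close"].shift(1)                                 # yesterday's close ("close-1")
    df["Bull/Bear"] = (df["close"] >= prev_close).astype(int)         # 1 = up or flat, 0 = down
    df = df.iloc[1:]                                                  # first row has no previous close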

    <?xml version="1.0" encoding="UTF-8"?><process version="9.6.000">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="9.6.000" expanded="true" name="Process">
        <parameter key="logverbosity" value="init"/>
        <parameter key="random_seed" value="2001"/>
        <parameter key="send_mail" value="never"/>
        <parameter key="notification_email" value=""/>
        <parameter key="process_duration_for_mail" value="30"/>
        <parameter key="encoding" value="SYSTEM"/>
        <process expanded="true">
          <operator activated="true" class="retrieve" compatibility="9.6.000" expanded="true" height="68" name="Retrieve DIA Basics" width="90" x="45" y="34">
            <parameter key="repository_entry" value="//Local Repository/Stock data/DIA Basics"/>
          </operator>
          <operator activated="true" class="replace_missing_values" compatibility="9.6.000" expanded="true" height="103" name="Replace Missing Values" width="90" x="112" y="340">
            <parameter key="return_preprocessing_model" value="false"/>
            <parameter key="create_view" value="false"/>
            <parameter key="attribute_filter_type" value="all"/>
            <parameter key="attribute" value=""/>
            <parameter key="attributes" value="ACCBL_20|ACCBM_20|ACCBU_20|AD|ADOSC_3_10|ADX_14|AMAT_LR_2|AMAT_SR_2|AO_5_34|AOBV_LR_2|AOBV_SR_2|APO_12_26|AROOND_14|AROONU_14|ATR_14|BBL_20|BBM_20|BBU_20|BOP|CCI_20_0.015|CG_10|close|CMF_20|CMO_14|COPC_11_14_10|DCL_10_20|DCM_10_20|DCU_10_20|DEC_1|DEMA_10|DMN_14|DMP_14|DPO_1|EFI_13|EMA_10|EOM_14_100000000|FISHERT_5|FWMA_10|high|HL2|HLC3|HMA_10|INC_1|KAMA_10_2_30|KCB_20|KCL_20|KCU_20|KST_10_15_20_30_10_10_10_15|KSTS_9|KURT_30|LDECAY_5|LOGRET_1|low|LR_14|MACD_12_26_9|MACDH_12_26_9|MACDS_12_26_9|MAD_30|MASSI_9_25|MEDIAN_30|MFI_14|MIDPOINT_2|MIDPRICE_2|MOM_10|NATR_14|NVI_1|OBV|OBV_EMA_2|OBV_EMA_4|OBV_max_2|OBV_min_2|OHLC4|open|PCTRET_1|PPO_12_26_9|PPOH_12_26_9|PPOS_12_26_9|PVI_1|PVOL|PVT|PWMA_10|QS_10|QTL_30_0.5|RMA_10|ROC_10|RSI_14|RVI_14_4|RVIS_14_4|SINWMA_14|SKEW_30|SLOPE_1|SMA_10|STDEV_30|STOCH_3|STOCH_5|STOCHF_3|STOCHF_14|SWMA_10|TEMA_10|TRIMA_10|TRUERANGE_1|TSI_13_25|UO_7_14_28|VAR_30|volume|VTXM_14|VTXP_14|VWAP|VWMA_10|WILLR_14|WMA_10|Z_30|ZLEMA_10"/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="attribute_value"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="time"/>
            <parameter key="block_type" value="attribute_block"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="value_matrix_row_start"/>
            <parameter key="invert_selection" value="false"/>
            <parameter key="include_special_attributes" value="false"/>
            <parameter key="default" value="average"/>
            <list key="columns"/>
          </operator>
          <operator activated="true" class="set_role" compatibility="9.6.000" expanded="true" height="82" name="Set Role (2)" width="90" x="246" y="340">
            <parameter key="attribute_name" value="timestamp"/>
            <parameter key="target_role" value="id"/>
            <list key="set_additional_roles"/>
          </operator>
          <operator activated="true" class="subprocess" compatibility="9.6.000" expanded="true" height="82" name="Subprocess" width="90" x="380" y="340">
            <process expanded="true">
              <operator activated="true" class="time_series:lag_series" compatibility="9.6.000" expanded="true" height="82" name="Lag" width="90" x="112" y="34">
                <list key="attributes">
                  <parameter key="close" value="1"/>
                </list>
                <parameter key="overwrite_attributes" value="false"/>
                <parameter key="extend_exampleset" value="false"/>
              </operator>
              <operator activated="true" class="generate_attributes" compatibility="9.6.000" expanded="true" height="82" name="Generate Attributes" width="90" x="313" y="34">
                <list key="function_descriptions">
                  <parameter key="Bull/Bear" value="if(close&gt;=[close-1], 1, 0)"/>
                </list>
                <parameter key="keep_all" value="true"/>
              </operator>
              <operator activated="true" class="select_attributes" compatibility="9.6.000" expanded="true" height="82" name="Select Attributes (2)" width="90" x="514" y="34">
                <parameter key="attribute_filter_type" value="subset"/>
                <parameter key="attribute" value=""/>
                <parameter key="attributes" value="close-1"/>
                <parameter key="use_except_expression" value="false"/>
                <parameter key="value_type" value="attribute_value"/>
                <parameter key="use_value_type_exception" value="false"/>
                <parameter key="except_value_type" value="time"/>
                <parameter key="block_type" value="attribute_block"/>
                <parameter key="use_block_type_exception" value="false"/>
                <parameter key="except_block_type" value="value_matrix_row_start"/>
                <parameter key="invert_selection" value="true"/>
                <parameter key="include_special_attributes" value="false"/>
              </operator>
              <operator activated="false" class="numerical_to_binominal" compatibility="9.6.000" expanded="true" height="82" name="Numerical to Binominal" width="90" x="648" y="136">
                <parameter key="attribute_filter_type" value="single"/>
                <parameter key="attribute" value="Bull/Bear"/>
                <parameter key="attributes" value=""/>
                <parameter key="use_except_expression" value="false"/>
                <parameter key="value_type" value="numeric"/>
                <parameter key="use_value_type_exception" value="false"/>
                <parameter key="except_value_type" value="real"/>
                <parameter key="block_type" value="value_series"/>
                <parameter key="use_block_type_exception" value="false"/>
                <parameter key="except_block_type" value="value_series_end"/>
                <parameter key="invert_selection" value="false"/>
                <parameter key="include_special_attributes" value="false"/>
                <parameter key="min" value="0.0"/>
                <parameter key="max" value="0.0"/>
              </operator>
              <operator activated="false" class="concurrency:join" compatibility="9.6.000" expanded="true" height="82" name="Join" width="90" x="782" y="136">
                <parameter key="remove_double_attributes" value="true"/>
                <parameter key="join_type" value="outer"/>
                <parameter key="use_id_attribute_as_key" value="true"/>
                <list key="key_attributes">
                  <parameter key="Bull/Bear" value="Bull/Bear"/>
                </list>
                <parameter key="keep_both_join_attributes" value="false"/>
              </operator>
              <connect from_port="in 1" to_op="Lag" to_port="example set input"/>
              <connect from_op="Lag" from_port="example set output" to_op="Generate Attributes" to_port="example set input"/>
              <connect from_op="Generate Attributes" from_port="example set output" to_op="Select Attributes (2)" to_port="example set input"/>
              <connect from_op="Select Attributes (2)" from_port="example set output" to_port="out 1"/>
              <portSpacing port="source_in 1" spacing="0"/>
              <portSpacing port="source_in 2" spacing="0"/>
              <portSpacing port="sink_out 1" spacing="0"/>
              <portSpacing port="sink_out 2" spacing="0"/>
            </process>
            <description align="center" color="transparent" colored="false" width="126">Create Bull/Bear</description>
          </operator>
          <operator activated="true" class="set_role" compatibility="9.6.000" expanded="true" height="82" name="Set Role" width="90" x="581" y="340">
            <parameter key="attribute_name" value="Bull/Bear"/>
            <parameter key="target_role" value="label"/>
            <list key="set_additional_roles"/>
          </operator>
          <operator activated="true" class="select_attributes" compatibility="9.6.000" expanded="true" height="82" name="Select Attributes" width="90" x="447" y="136">
            <parameter key="attribute_filter_type" value="subset"/>
            <parameter key="attribute" value=""/>
            <parameter key="attributes" value="close|high|low|MACD_12_26_9|open|RSI_14|SMA_10|STOCH_3|STOCH_5|timestamp|Bull/Bear"/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="attribute_value"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="time"/>
            <parameter key="block_type" value="attribute_block"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="value_matrix_row_start"/>
            <parameter key="invert_selection" value="false"/>
            <parameter key="include_special_attributes" value="false"/>
          </operator>
          <operator activated="true" class="time_series:normalization" compatibility="9.6.000" expanded="true" height="68" name="Normalize (Series)" width="90" x="581" y="85">
            <parameter key="attribute_filter_type" value="subset"/>
            <parameter key="attribute" value=""/>
            <parameter key="attributes" value="|Bull/Bear"/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="numeric"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="real"/>
            <parameter key="block_type" value="value_series"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="value_series_end"/>
            <parameter key="invert_selection" value="true"/>
            <parameter key="include_special_attributes" value="true"/>
            <parameter key="overwrite_attributes" value="true"/>
            <parameter key="new_attributes_postfix" value="_normalized"/>
          </operator>
          <operator activated="true" class="time_series:windowing" compatibility="9.6.000" expanded="true" height="82" name="Windowing (2)" width="90" x="715" y="85">
            <parameter key="attribute_filter_type" value="all"/>
            <parameter key="attribute" value=""/>
            <parameter key="attributes" value="close"/>
            <parameter key="use_except_expression" value="false"/>
            <parameter key="value_type" value="attribute_value"/>
            <parameter key="use_value_type_exception" value="false"/>
            <parameter key="except_value_type" value="time"/>
            <parameter key="block_type" value="attribute_block"/>
            <parameter key="use_block_type_exception" value="false"/>
            <parameter key="except_block_type" value="value_matrix_row_start"/>
            <parameter key="invert_selection" value="false"/>
            <parameter key="include_special_attributes" value="true"/>
            <parameter key="has_indices" value="true"/>
            <parameter key="indices_attribute" value="timestamp"/>
            <parameter key="window_size" value="30"/>
            <parameter key="no_overlapping_windows" value="false"/>
            <parameter key="step_size" value="1"/>
            <parameter key="create_horizon_(labels)" value="true"/>
            <parameter key="horizon_attribute" value="Bull/Bear"/>
            <parameter key="horizon_size" value="1"/>
            <parameter key="horizon_offset" value="0"/>
          </operator>
          <operator activated="true" class="split_data" compatibility="9.6.000" expanded="true" height="124" name="Split Data" width="90" x="849" y="85">
            <enumeration key="partitions">
              <parameter key="ratio" value="0.7"/>
              <parameter key="ratio" value="0.2"/>
              <parameter key="ratio" value="0.1"/>
            </enumeration>
            <parameter key="sampling_type" value="linear sampling"/>
            <parameter key="use_local_random_seed" value="false"/>
            <parameter key="local_random_seed" value="1992"/>
          </operator>
          <operator activated="true" class="collect" compatibility="9.6.000" expanded="true" height="82" name="Collect (3)" width="90" x="849" y="289">
            <parameter key="unfold" value="false"/>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_timeseries_converter" compatibility="0.9.003" expanded="true" height="82" name="TimeSeries to Tensor (3)" width="90" x="983" y="289"/>
          <operator activated="true" class="collect" compatibility="9.6.000" expanded="true" height="82" name="Collect" width="90" x="1050" y="34">
            <parameter key="unfold" value="false"/>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_timeseries_converter" compatibility="0.9.003" expanded="true" height="82" name="TimeSeries to Tensor" width="90" x="1184" y="34"/>
          <operator activated="true" class="collect" compatibility="9.6.000" expanded="true" height="82" name="Collect (2)" width="90" x="1050" y="187">
            <parameter key="unfold" value="false"/>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_timeseries_converter" compatibility="0.9.003" expanded="true" height="82" name="TimeSeries to Tensor (2)" width="90" x="1184" y="187"/>
          <operator activated="true" class="deeplearning:dl4j_tensor_sequential_neural_network" compatibility="0.9.003" expanded="true" height="103" name="Deep Learning (Tensor)" width="90" x="1318" y="34">
            <parameter key="loss_function" value="Multiclass Cross Entropy (Classification)"/>
            <parameter key="epochs" value="50"/>
            <parameter key="use_miniBatch" value="true"/>
            <parameter key="batch_size" value="16"/>
            <parameter key="updater" value="Adam"/>
            <parameter key="learning_rate" value="0.01"/>
            <parameter key="momentum" value="0.9"/>
            <parameter key="rho" value="0.95"/>
            <parameter key="epsilon" value="1.0E-6"/>
            <parameter key="beta1" value="0.9"/>
            <parameter key="beta2" value="0.999"/>
            <parameter key="RMSdecay" value="0.95"/>
            <parameter key="weight_initialization" value="Xavier"/>
            <parameter key="bias_initialization" value="0.0"/>
            <parameter key="use_regularization" value="false"/>
            <parameter key="l1_strength" value="0.1"/>
            <parameter key="l2_strength" value="0.1"/>
            <parameter key="optimization_method" value="Stochastic Gradient Descent"/>
            <parameter key="backpropagation" value="Standard"/>
            <parameter key="backpropagation_length" value="50"/>
            <parameter key="infer_input_shape" value="true"/>
            <parameter key="network_type" value="Simple Neural Network"/>
            <parameter key="log_each_epoch" value="true"/>
            <parameter key="epochs_per_log" value="10"/>
            <parameter key="use_local_random_seed" value="false"/>
            <parameter key="local_random_seed" value="1992"/>
            <process expanded="true">
              <operator activated="true" class="deeplearning:dl4j_lstm_layer" compatibility="0.9.003" expanded="true" height="68" name="Add LSTM Layer" width="90" x="112" y="136">
                <parameter key="neurons" value="300"/>
                <parameter key="gate_activation" value="TanH"/>
                <parameter key="forget_gate_bias_initialization" value="1.0"/>
              </operator>
              <operator activated="true" class="deeplearning:dl4j_dense_layer" compatibility="0.9.003" expanded="true" height="68" name="Add Fully-Connected Layer" width="90" x="514" y="136">
                <parameter key="number_of_neurons" value="2"/>
                <parameter key="activation_function" value="Softmax"/>
                <parameter key="use_dropout" value="false"/>
                <parameter key="dropout_rate" value="0.25"/>
                <parameter key="overwrite_networks_weight_initialization" value="false"/>
                <parameter key="weight_initialization" value="Normal"/>
                <parameter key="overwrite_networks_bias_initialization" value="false"/>
                <parameter key="bias_initialization" value="0.0"/>
              </operator>
              <connect from_port="layerArchitecture" to_op="Add LSTM Layer" to_port="layerArchitecture"/>
              <connect from_op="Add LSTM Layer" from_port="layerArchitecture" to_op="Add Fully-Connected Layer" to_port="layerArchitecture"/>
              <connect from_op="Add Fully-Connected Layer" from_port="layerArchitecture" to_port="layerArchitecture"/>
              <portSpacing port="source_layerArchitecture" spacing="0"/>
              <portSpacing port="sink_layerArchitecture" spacing="0"/>
            </process>
          </operator>
          <operator activated="true" class="deeplearning:dl4j_apply_tensor_model" compatibility="0.9.003" expanded="true" height="82" name="Apply Model (Tensor)" width="90" x="1452" y="187"/>
          <operator activated="true" class="select" compatibility="9.6.000" expanded="true" height="68" name="Select" width="90" x="1519" y="85">
            <parameter key="index" value="1"/>
            <parameter key="unfold" value="false"/>
          </operator>
          <operator activated="false" class="performance_regression" compatibility="9.6.000" expanded="true" height="82" name="Performance" width="90" x="1519" y="340">
            <parameter key="main_criterion" value="first"/>
            <parameter key="root_mean_squared_error" value="true"/>
            <parameter key="absolute_error" value="false"/>
            <parameter key="relative_error" value="true"/>
            <parameter key="relative_error_lenient" value="false"/>
            <parameter key="relative_error_strict" value="false"/>
            <parameter key="normalized_absolute_error" value="false"/>
            <parameter key="root_relative_squared_error" value="false"/>
            <parameter key="squared_error" value="true"/>
            <parameter key="correlation" value="true"/>
            <parameter key="squared_correlation" value="false"/>
            <parameter key="prediction_average" value="true"/>
            <parameter key="spearman_rho" value="false"/>
            <parameter key="kendall_tau" value="false"/>
            <parameter key="skip_undefined_labels" value="true"/>
            <parameter key="use_example_weights" value="true"/>
          </operator>
          <operator activated="true" class="performance_classification" compatibility="9.6.000" expanded="true" height="82" name="Performance (2)" width="90" x="1653" y="289">
            <parameter key="main_criterion" value="first"/>
            <parameter key="accuracy" value="true"/>
            <parameter key="classification_error" value="true"/>
            <parameter key="kappa" value="false"/>
            <parameter key="weighted_mean_recall" value="false"/>
            <parameter key="weighted_mean_precision" value="false"/>
            <parameter key="spearman_rho" value="false"/>
            <parameter key="kendall_tau" value="false"/>
            <parameter key="absolute_error" value="true"/>
            <parameter key="relative_error" value="false"/>
            <parameter key="relative_error_lenient" value="false"/>
            <parameter key="relative_error_strict" value="false"/>
            <parameter key="normalized_absolute_error" value="true"/>
            <parameter key="root_mean_squared_error" value="true"/>
            <parameter key="root_relative_squared_error" value="true"/>
            <parameter key="squared_error" value="true"/>
            <parameter key="correlation" value="false"/>
            <parameter key="squared_correlation" value="false"/>
            <parameter key="cross-entropy" value="false"/>
            <parameter key="margin" value="false"/>
            <parameter key="soft_margin_loss" value="false"/>
            <parameter key="logistic_loss" value="false"/>
            <parameter key="skip_undefined_labels" value="true"/>
            <parameter key="use_example_weights" value="true"/>
            <list key="class_weights"/>
          </operator>
          <connect from_op="Retrieve DIA Basics" from_port="output" to_op="Replace Missing Values" to_port="example set input"/>
          <connect from_op="Replace Missing Values" from_port="example set output" to_op="Set Role (2)" to_port="example set input"/>
          <connect from_op="Set Role (2)" from_port="example set output" to_op="Subprocess" to_port="in 1"/>
          <connect from_op="Subprocess" from_port="out 1" to_op="Set Role" to_port="example set input"/>
          <connect from_op="Set Role" from_port="example set output" to_op="Select Attributes" to_port="example set input"/>
          <connect from_op="Select Attributes" from_port="example set output" to_op="Normalize (Series)" to_port="example set"/>
          <connect from_op="Normalize (Series)" from_port="example set" to_op="Windowing (2)" to_port="example set"/>
          <connect from_op="Windowing (2)" from_port="windowed example set" to_op="Split Data" to_port="example set"/>
          <connect from_op="Split Data" from_port="partition 1" to_op="Collect" to_port="input 1"/>
          <connect from_op="Split Data" from_port="partition 2" to_op="Collect (2)" to_port="input 1"/>
          <connect from_op="Split Data" from_port="partition 3" to_op="Collect (3)" to_port="input 1"/>
          <connect from_op="Collect (3)" from_port="collection" to_op="TimeSeries to Tensor (3)" to_port="collection"/>
          <connect from_op="TimeSeries to Tensor (3)" from_port="tensor" to_op="Apply Model (Tensor)" to_port="unlabelled tensor"/>
          <connect from_op="Collect" from_port="collection" to_op="TimeSeries to Tensor" to_port="collection"/>
          <connect from_op="TimeSeries to Tensor" from_port="tensor" to_op="Deep Learning (Tensor)" to_port="training set"/>
          <connect from_op="Collect (2)" from_port="collection" to_op="TimeSeries to Tensor (2)" to_port="collection"/>
          <connect from_op="TimeSeries to Tensor (2)" from_port="tensor" to_op="Deep Learning (Tensor)" to_port="test set"/>
          <connect from_op="Deep Learning (Tensor)" from_port="model" to_op="Apply Model (Tensor)" to_port="model"/>
          <connect from_op="Apply Model (Tensor)" from_port="labeled data" to_op="Select" to_port="collection"/>
          <connect from_op="Apply Model (Tensor)" from_port="model" to_port="result 1"/>
          <connect from_op="Select" from_port="selected" to_op="Performance (2)" to_port="labelled data"/>
          <connect from_op="Performance (2)" from_port="performance" to_port="result 2"/>
          <connect from_op="Performance (2)" from_port="example set" to_port="result 3"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
          <portSpacing port="sink_result 3" spacing="0"/>
          <portSpacing port="sink_result 4" spacing="0"/>
        </process>
      </operator>
    </process>
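
    (The Windowing operator above, with window_size = 30 and Bull/Bear as a one-step horizon, corresponds roughly to this numpy sketch; the helper name and array layout are illustrative assumptions, not the operator's exact output:)

    import numpy as np

    def make_windows(features, labels, window=30):
        """Turn a (n_days, n_attributes) matrix into an LSTM-ready (samples, timesteps, features) tensor."""
        X, y = [], []
        for end in range(window, len(features)):
            X.append(features[end - window:end])   # the last 30 time steps of all attributes
            y.append(labels[end])                  # Bull/Bear of the step right after the window
        return np.asarray(X), np.asarray(y)

    # features: 2D numpy array (n_days, n_attributes); labels: 1D array of 0/1
    # X, y = make_windows(features, labels)
    # X.shape -> (n_samples, 30, n_attributes)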
    


  • lionelderkrikor Moderator, RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
    @Repletion

    I will study your use case.
    But after seeing your process and the description of what you want to perform, maybe you can use the following templates:

    04 S&P 500 Regression using Windowing and Convolution
    03 gas price change regression

    Maybe you can adapt these two regression processes into a binary classification process.

    Regards,

    Lionel



  • Repletion Member Posts: 24 Contributor I
    @hughesfleming68 Drawing inspiration from that workflow, I adapted mine to look like it, with some minor exceptions; I'm still using the tensor-based deep learning. The model trains and tests, but it runs into a classic statistical modelling issue: it simply predicts everything as down or as up, depending on the number of epochs. How does one get around this problem? Is there a constraint that can be included in the model telling it that it can't predict everything as the same value?
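
    (One common remedy, sketched here with scikit-learn and Keras-style calls rather than RapidMiner operators: check the class balance and weight the rarer class so that collapsing to a single prediction is penalized. The labels below are placeholders:)

    import numpy as np
    from sklearn.utils.class_weight import compute_class_weight

    y_train = np.array([0, 0, 0, 0, 1, 1])          # placeholder labels; use the real Bull/Bear column
    classes = np.unique(y_train)
    weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
    class_weight = dict(zip(classes.tolist(), weights))
    print(class_weight)                              # e.g. {0: 0.75, 1: 1.5}
    # In Keras this dict would be passed as class_weight=... to model.fit();
    # in RapidMiner a similar effect can come from balancing/resampling the training windows.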
  • hughesfleming68 Member Posts: 323 Unicorn
    @Repletion Take a look at this article. It will help you with your data prep. The same concepts apply in RapidMiner.
    https://machinelearningmastery.com/how-to-scale-data-for-long-short-term-memory-networks-in-python/
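
    (The core point of the article, sketched with scikit-learn on placeholder prices: fit the scaler on the training split only, then reuse it on the validation/test data to avoid look-ahead bias.)

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    train = np.array([[100.0], [102.0], [101.0], [105.0]])   # placeholder closing prices
    test = np.array([[106.0], [104.0]])

    scaler = MinMaxScaler(feature_range=(0, 1))
    train_scaled = scaler.fit_transform(train)   # min/max learned from training data only
    test_scaled = scaler.transform(test)         # same transform applied to unseen data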
  • hughesfleming68 Member Posts: 323 Unicorn
    Also, don't assume that an LSTM is the best choice for your data. I always prefer a CNN, and don't overlook gradient boosted trees for a problem like this.
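
    (As a rough illustration of the CNN alternative on the same (30, n_features) windows; the layer sizes are assumptions, not a recommendation:)

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D, Dense

    n_timesteps, n_features = 30, 10
    model = Sequential([
        Conv1D(64, kernel_size=3, activation="relu", input_shape=(n_timesteps, n_features)),
        GlobalMaxPooling1D(),                     # collapse the time axis
        Dense(2, activation="softmax"),           # one output neuron per class
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])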
  • Repletion Member Posts: 24 Contributor I
    edited June 2020
    @hughesfleming68 OK, I will take a look. What's your argument for going with a CNN over an LSTM? In the financial markets certain patterns recur, and those are the ones that an LSTM should be able to take into consideration. So I'm curious how a CNN will perform on this.
  • hughesfleming68 Member Posts: 323 Unicorn
    edited June 2020
    @Repletion This: https://medium.com/@raushan2807/temporal-convolutional-networks-bfea16e6d7d2

    Don't underestimate the training time advantage.
  • Repletion Member Posts: 24 Contributor I
    @hughesfleming68 Unfortunately, it doesn't look like there are any TCN layers or extensions in RapidMiner.
  • Repletion Member Posts: 24 Contributor I
    @hughesfleming68 When working with gradient boosted trees and tuning their parameters through the nested "Optimize Parameters (Evolutionary)" operator, do you have any golden rules that help minimize the process time? Say you know that the maximum number of trees is, as a rule of thumb, between 1 and 500, and so on for the rest of the parameters. Setting some boundaries for the optimizer would greatly help cut off some of the processing time.
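
    (The "set boundaries" idea, sketched with scikit-learn rather than the RapidMiner operator; the ranges below are illustrative assumptions, not golden rules. Bounding each parameter and capping the number of candidates is what keeps the runtime down:)

    from scipy.stats import randint, uniform
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

    param_distributions = {
        "n_estimators": randint(50, 500),      # trees bounded to 50-500 instead of unbounded
        "max_depth": randint(2, 8),
        "learning_rate": uniform(0.01, 0.19),  # 0.01-0.20
        "subsample": uniform(0.6, 0.4),        # 0.6-1.0
    }
    search = RandomizedSearchCV(
        GradientBoostingClassifier(),
        param_distributions,
        n_iter=30,                             # 30 candidates instead of a full grid/evolutionary run
        cv=TimeSeriesSplit(n_splits=3),        # respects the temporal ordering of the data
        scoring="accuracy",
        n_jobs=-1,
    )
    # search.fit(X_train_2d, y_train)          # windows flattened to 2D for tree models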