About Data Pipeline Structure
I am using RapidMiner for a big data analysis project, and my workflow consists entirely of Execute Python operators. The workflow reads a large pandas DataFrame at the very beginning and then processes it row by row in the subsequent operators. I notice that each operator only starts once the previous operator has finished processing all of the rows. Is it possible to pass each row to the next operator as soon as it finishes processing, instead of waiting for the whole frame? In other words, does RapidMiner support data pipelining (streaming) between operators?
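To make the setup concrete, here is a minimal sketch of what my chained operators look like. The function names, column names, and transformations are illustrative, not my real code; in RapidMiner each Execute Python operator defines a single `rm_main` that receives its input as a pandas DataFrame and returns one, so the second script cannot begin until the first returns its entire frame:

```python
import pandas as pd

# Illustrative stand-ins for two Execute Python operators.
# RapidMiner would call each script's rm_main with the upstream DataFrame.

def rm_main_step1(data):
    # Processes every row before returning, so the downstream
    # operator cannot start until the whole frame is finished.
    data = data.copy()
    data["doubled"] = [v * 2 for v in data["value"]]
    return data

def rm_main_step2(data):
    # Only receives its input after step 1 has returned completely.
    data = data.copy()
    data["shifted"] = [v + 1 for v in data["doubled"]]
    return data

# Simulating the operator chain locally: strictly sequential,
# frame-at-a-time, which is the behavior I would like to avoid.
df = pd.DataFrame({"value": [1, 2, 3]})
out = rm_main_step2(rm_main_step1(df))
print(out["shifted"].tolist())  # [3, 5, 7]
```

What I am hoping for is a way to have `rm_main_step2` start consuming rows while `rm_main_step1` is still producing them.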