Short answer: Mixed Euclidean Distance. Why? It is the metric most commonly used.

Long answer: Selecting a metric means defining whether two Examples are similar(*) or not. Which metric is "right" is almost a philosophical question. The metric strongly influences all subsequent learning operations, so choosing the right one is crucial. This picture illustrates the similarity problem:

You know what similar is when you see it! But how do you define it mathematically...?

Okay, seriously: all available metrics have different properties, and choosing the right one depends on the data the metric is applied to. So, given the current state of information, we cannot make a wise suggestion, and listing all properties of all metrics... I do not think I can/will do this ;D. But: the Mixed Euclidean Distance works for the general case...
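For illustration, here is a minimal sketch of what a mixed Euclidean distance typically computes, assuming the usual convention (squared differences for numerical attributes, a 0/1 mismatch penalty for nominal ones). This is a sketch of the general idea, not RapidMiner's exact implementation:

```java
public class MixedEuclideanSketch {

    // numerical attributes as doubles, nominal attributes as Strings;
    // assumes both examples have the same attributes in the same order
    public static double distance(double[] num1, double[] num2,
                                  String[] nom1, String[] nom2) {
        double sum = 0.0;
        for (int i = 0; i < num1.length; i++) {
            double d = num1[i] - num2[i];
            sum += d * d;                   // numerical part: squared difference
        }
        for (int i = 0; i < nom1.length; i++) {
            if (!nom1[i].equals(nom2[i])) {
                sum += 1.0;                 // nominal part: 0/1 mismatch penalty
            }
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        double d = distance(new double[]{1.0, 2.0}, new double[]{4.0, 6.0},
                            new String[]{"red"}, new String[]{"blue"});
        System.out.println(d); // sqrt(9 + 16 + 1) = sqrt(26) ≈ 5.099
    }
}
```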

greetings

Steffen

*Although "similarity" is not the same as "metric" in the literature, I use the term here to ease the explanation.

> What is the Java method I should use to compute the distance between two examples [com.rapidminer.example.Example]?

Take a look at the class com.rapidminer.operator.similarity.SimilarityUtil.

There you can create a SimilarityMeasure via resolveSimilarityMeasure; the distance/similarity is finally calculated by the method similarity(String x, String y) of the class SimilarityMeasure. Hope this is the information you are looking for.

greetings

Steffen

It's not my post, but I want to ask something by the way.

In similarity(String x, String y), x and y are IDs of the examples. But the examples must be in the same ExampleSet: the one passed to the similarity measure's init method.

The only option I see to compute the similarity between two examples from two different ExampleSets is to merge both ExampleSets. Is there any other way?

Thanks in advance.

F.J. Cuberos

IngoRM (RM Founder):

Hello,

In cases where the similarity measure extends "AbstractValueBasedSimilarity", you can cast to this class and then access methods like:

public double similarity(Example x, Example y);

This method does not rely on the init example set or on IDs at all.
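To make the cast concrete, here is a self-contained sketch. The classes below are simplified stand-ins made up for illustration only (the real AbstractValueBasedSimilarity works on Example objects, not on raw double arrays):

```java
// Simplified stand-ins for the RapidMiner classes, for illustration only.
abstract class SimilarityMeasure {
    // ID-based entry point, tied to the example set passed to init()
    public abstract double similarity(String idX, String idY);
}

abstract class AbstractValueBasedSimilarity extends SimilarityMeasure {
    // value-based entry point: works on raw attribute values,
    // independent of any example set or IDs
    public abstract double similarity(double[] x, double[] y);

    @Override
    public double similarity(String idX, String idY) {
        throw new UnsupportedOperationException("ID lookup not needed here");
    }
}

class EuclideanDistanceMock extends AbstractValueBasedSimilarity {
    @Override
    public double similarity(double[] x, double[] y) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double d = x[i] - y[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}

public class CastDemo {
    public static void main(String[] args) {
        SimilarityMeasure measure = new EuclideanDistanceMock();
        // the cast described above gives access to the value-based method
        if (measure instanceof AbstractValueBasedSimilarity) {
            AbstractValueBasedSimilarity vbs = (AbstractValueBasedSimilarity) measure;
            double d = vbs.similarity(new double[]{0, 0}, new double[]{3, 4});
            System.out.println(d); // 5.0
        }
    }
}
```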

About the IDs (this also connects to the other thread): using IDs for this was intended by the original author to make things easier to access and also to avoid recalculations. However, it turned out that this is not the case - at least not for larger data sets - and that there are other constraints, like the problems with different example sets you mentioned. For that reason, we decided to revise the similarity calculations, and we have already started with the KNN learner as an example. This revision will definitely be finished by the next release, and then the similarity measures will be easier to access than they are now (although the method above should work...)

Cheers,

Ingo

The Euclidean distance in Weka and the Euclidean distance in Rapid-I do not give the same distance value. I reviewed them, and they are implemented in the same way. ???

Is this difference related to the way the data set is represented (in Rapid-I as a double array [datamanagement = double_array])?

-- Motaz K. Saad

IngoRM (RM Founder):

Hi,

Hmm, the data representation could be an explanation, but I actually do not believe it: in Weka, data is always represented as double as well. Missing values could also make a difference of course, but you probably checked that. It could also be a bug in one of the implementations.

However, I just tried it inside of RapidMiner on a very simple example set containing only these two examples:

Att1                 Att2
2.467612009982549    7.2671269538811885
1.2924127751628518   -8.624734314791924
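One difference worth ruling out (an assumption on my part, not something established in this thread): Weka's weka.core.EuclideanDistance normalizes attribute values to [0, 1] by default, while a plain Euclidean distance on raw values does not. A self-contained sketch of how much that matters on two sample points:

```java
public class NormalizationDemo {

    // plain Euclidean distance on raw attribute values
    public static double euclidean(double[] x, double[] y) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) {
            double d = x[i] - y[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // min-max normalize each attribute over this (tiny) data set, then measure;
    // this mimics Weka-style normalization, not Weka's actual code
    public static double normalizedEuclidean(double[] x, double[] y) {
        double[] nx = new double[x.length];
        double[] ny = new double[y.length];
        for (int i = 0; i < x.length; i++) {
            double min = Math.min(x[i], y[i]);
            double range = Math.max(x[i], y[i]) - min;
            nx[i] = range == 0 ? 0 : (x[i] - min) / range;
            ny[i] = range == 0 ? 0 : (y[i] - min) / range;
        }
        return euclidean(nx, ny);
    }

    public static void main(String[] args) {
        double[] a = {2.467612009982549, 7.2671269538811885};
        double[] b = {1.2924127751628518, -8.624734314791924};
        System.out.println(euclidean(a, b));           // ≈ 15.935 on raw values
        // with only two examples, each attribute normalizes to 0 and 1,
        // so the normalized distance collapses to sqrt(2) ≈ 1.414
        System.out.println(normalizedEuclidean(a, b));
    }
}
```

So two "identical" Euclidean implementations can disagree badly if one normalizes and the other does not; checking that setting would be my first step.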
