# Calculating Method/Substep Efficiency



## Dane man (Oct 20, 2014)

I was thinking about the methods and substeps that have been invented for solving the cube, and the many factors that are involved with how well and how fast those methods can achieve the solved state. It's not an exact science of course, but it's something.

In essence I wanted to find a quantifiable way to measure the "efficiency" of any given method or substep. Efficiency in the sense of amount of the cube solved, relative to speed (in moves), memorization, and mental effort required. In other words, how much bang you get for your buck.

Now, this _isn't_ supposed to measure which methods can achieve the fastest solve; generally, the more mental effort you put forth, the faster you can solve, but past a certain point the amount of bang you get for your buck is severely reduced.

For example, God's algorithm. If it were humanly possible to execute, it would be the fastest solving method, but even then it would not be the most efficient. Why? Because it would require such mental capacity for either memorization or intuitive solving as to be practically impossible, making it perhaps the most inefficient method of solving the cube: you get very little benefit for the amount of work and resources applied.

On the other end of the spectrum, you have the Turn-Check-If-Solved method, where each step is literally to turn the cube randomly, and after each move, check to see if the cube is in the solved state. While this requires absolutely no mental capacity or skill, the number of moves required on average to achieve a solved state would be ridiculously high, therefore making this method extremely inefficient as well.

And so this formula isn't supposed to be used as a measurement of which method is the best or which method can solve the fastest, because obviously the fastest is God's Algorithm. It's about measuring the efficiency of effort required to achieve a solved state in as few moves as possible.

Here is the formula:

2.5(Orientations) + 2.5(Permutations)
--------------------------------------------------------
Algorithms + 2(AvgMoves) + (ReplaceAlgs/IntuitiveCubies)

Orientations = Number of cubies correctly oriented by the method/substep
Permutations = Number of cubies correctly permuted by the method/substep
Algorithms = Number of memorized algorithms required (mirrors, etc. included)
AvgMoves = Number of moves on average executed before achieving the desired state
ReplaceAlgs = Number of algorithms that would be required to replace the intuitive step(s)
IntuitiveCubies = Number of cubies solved using intuitive methods
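To make the arithmetic easy to check, the formula can be sketched as a small Python function (the function name and the zero-default for the intuitive term are my choices, not part of the post):

```python
def efficiency(orientations, permutations, algorithms, avg_moves,
               replace_algs=0, intuitive_cubies=0):
    """Efficiency score as defined above.

    The intuitive-ineffectiveness term (ReplaceAlgs/IntuitiveCubies) is
    taken as 0 when a step has no intuitive part, matching the worked
    examples below (OLL and PLL use "+0").
    """
    numerator = 2.5 * orientations + 2.5 * permutations
    intuitive = replace_algs / intuitive_cubies if intuitive_cubies else 0
    return numerator / (algorithms + 2 * avg_moves + intuitive)

# Full CFOP: 20 cubies oriented and permuted, 78 algs, 54.8 avg moves,
# 47 replacement algs over 12 intuitively solved cubies.
print(round(efficiency(20, 20, 78, 54.8, 47, 12), 5))  # 0.52215
```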

Now to explain the reasoning behind the organization of this formula. 

First, the numerator of the formula is a measure of the percentage of the cube solved: if a method solves all 20 cubies, the numerator will be exactly 100. So, for any full method calculation, the numerator is 100.

Now the denominator. The greater the values in the denominator, the less efficient the method/substep is considered. The values that count against efficiency are:
- Algorithms
- AvgMoves
- Intuitive ineffectiveness = (ReplaceAlgs/IntuitiveCubies)

*Algorithms*: Now obviously, the more algorithms required to achieve any desired state, the less efficient the method is. If one method solves 8 cubies with 10 algs, and another solves 8 cubies with 2 algs, then the second method is more efficient (ignoring other variables).

*Avg Moves*: Another obvious one. The more moves required on average, the less efficient it is. It's multiplied by 2 because I needed to give the number of moves a good weight in terms of efficiency value (again, this is an inexact science).

*Intuitive ineffectiveness*: This one isn't so obvious. Many substeps are intuitive, and I needed a way of measuring their effectiveness numerically. So I assumed that the more algorithms it would take to replace an intuitive step, the more logic/brain-power the step requires, and therefore the more inefficiently it uses resources. At the same time, the number of cubies that end up solved by the step matters: the more cubies it solves, the more effective it is, which reduces the ineffectiveness. (An orientation or a permutation is worth 0.5 cubies.)

That's the formula. Hopefully I've explained it in a way that makes sense. It could probably be better, but an inexact science is very difficult to "better".

Now there are a few things that need to be estimated, for example the number of algorithms that would replace solving the cross. I estimated about 6 for the purpose of having a numerical base. This gives it an intuitive ineffectiveness of (6/4). Compare that to F2L, which has an intuitive ineffectiveness of (41/8).

Another thing that would need to be estimated is the algorithms that would replace EOLine, or Block Building. In fact, it appears that the first steps of solving the cube in any method do not have any really solid way of measuring the number of algorithms that would replace intuitive execution. Again (again), it's an inexact science.


Now when it comes to measuring substeps, there will be a rather clear decline in efficiency the further into the solve the substep lies. This is because the more cubies you already have solved, the more limited you are in solving the others efficiently, both in the mental effort required and in the number of moves executed. Therefore the substeps Cross and F2L will be much more efficient than OLL or PLL.

So as a few examples, I'm going to work out the efficiencies of the CFOP method and its substeps.

*CFOP*
100/(78+2(54.8)+(47/12)) = 100/(78+109.6+3.91667) = 100/191.51667 = *0.52215*

*Cross*
20/(0+2(6.5)+(6/4)) = 20/(0+13+1.5) = 20/14.5 = *1.37931*

*F2L(intuitive)*
40/(0+2(26.8)+(41/8)) = 40/(0+53.6+5.125) = 40/58.725 = *0.68114*

*OLL*
20/(57+2(9.7)+0) = 20/(57+19.4) = 20/76.4 = *0.26178*

*PLL*
20/(21+2(11.8)+0) = 20/(21+23.6) = 20/44.6 = *0.44843*

You can even combine substeps and measure their combined efficiency:
*OLL+PLL*
40/(78+2(21.5)+0) = 40/(78+43) = 40/121 = *0.33058*


What do you think?


----------



## Dane man (Oct 20, 2014)

The estimated distribution of data.

If we were to graph the points of data for the many methods and substeps that have been invented (x = Effort Required, y = Efficiency), then we would likely end up with a rough bell curve:


An interesting fact is that these data points won't lie in a perfect line; some dots will be above the bell curve, some below. We would see that some methods might be less efficient than other methods that require the same effort.

The more effort you put in past a certain point, the less bang you get for your buck. So once you've gone "over the hill", so to speak, the effort you add gives you less and less benefit. If we were to try to estimate mathematically the total benefit of any given amount of effort, we would find the closest-fitting bell curve to the data, then calculate the integral from 0 to the level of effort specified.





PJKCuber said:


> You deserve a PhD in cube theory.


Hehe. It was just a random idea I had in the middle of the night. I'm not sure how well it really works when applied though.


----------



## Kirjava (Oct 20, 2014)

Dane man said:


> it's an inexact science.



Sums it up.

However, if you view the results with a degree of skepticism you can possibly grep somewhat meaningful information. 

The problem is the difficulty in quantifying various elements - things like "Intuitive ineffectiveness" can be largely subjective aside from being hard to quantify in the first place. If you want to improve the accuracy of this it may be useful to include other elements like recognition et al. If you do /that/ the problem then becomes an issue of balancing each element to ensure they are proportionally representative. Maybe you can create and plug arbitrary methods that you have a general idea of the efficiency of into the formula to check how accurate it is. Would require a lot of work and research. 

Even after considering all this, will the results be anything more than meaningless? I've always considered something like this too deep and subjective to analyse effectively, but maybe you can convince me otherwise.


----------



## GuRoux (Oct 20, 2014)

I don't really understand the replace algs part. You put that 6 algs are needed to replace intuitive cross building? That's far too few.


----------



## Dane man (Oct 20, 2014)

Kirjava said:


> If you do /that/ the problem then becomes an issue of balancing each element to ensure they are proportionally representative. Maybe you can create and plug arbitrary methods that you have a general idea of the efficiency of into the formula to check how accurate it is. Would require a lot of work and research.



Exactly. Unfortunately, there is a lot that would be subjective when trying to compare the relative effectiveness of the variables. For example, how many added algorithms is equal to how many additional moves, efficiency wise? Or how many added algs is equal to what added amount of intuitive ineffectiveness? That's something that is hard to calculate, and many people would have different opinions on what is equal to what, therefore, I decided to leave all but the average moves in their raw form.

Even so, the data appears to follow a very logical progression and gives approximately correct results when applied to many methods and substeps. (As one example, I compared my 2Look BLL to OLL/PLL: BLL got 0.28945, OLL/PLL got 0.33058. So while BLL's average move count is 2 lower, it also has substantially more algorithms to memorize, and is thus less efficient.) The place where it will be most accurate is with the LL methods and substeps.

It's not perfect, but it's something interesting to attempt to measure.


----------



## Dane man (Oct 20, 2014)

GuRoux said:


> I don't really understand the replace algs part. You put that 6 algs are needed to replace intuitive cross building? That's far too few.



My estimate is based on the following. I divide the cross into four parts, each piece being one. Then I estimated the number of algorithms that would be required to put one piece in place. I came up with 5 ((U move) R, (U move) R', (U move) R2, R (U move) F, R (U move) R'), then added one because the U move could be U/U' or U2. Because the cross requires so little mental effort, and can be taught using those algorithms to a small child, I feel the number I came up with is accurate. 

If you have another method of estimation, I would love to see it.


----------



## Stefan (Oct 20, 2014)

I haven't read it all, but your Turn-Check-If-Solved method just gave me an idea for a simple method. Let A1 be an 8-cycle of corners, A2 be a 7-cycle of the corners except DFR, etc. I think you'll get the idea.

Step 1: Repeat A1 until the DFR corner is at DFR (any orientation).
Step 2: Repeat R F until DFR is solved.
Step 3: Repeat A2 until the UFL corner is at UFL (any orientation).
Step 4: Repeat U L until UFL is solved.
...
Step 37: If the cube isn't solved yet, do M' U M' U M' U2 M U M U M U2.


----------



## Jakube (Oct 20, 2014)

Stefan said:


> I haven't read it all, but your Turn-Check-If-Solved method just gave me an idea for a simple method. Let A1 be an 8-cycle of corners, A2 be a 7-cycle of the corners except DFR, etc. I think you'll get the idea.
> 
> Step 1: Repeat A1 until the DFR corner is at DFR (any orientation).
> Step 2: Repeat R F until DFR is solved.
> ...



Are you sure about the number of steps? I only get 35 (7+7+10+11).


----------



## Stefan (Oct 20, 2014)

Well...

Step 8: Dance for a minute.
Step 26: Sing for a minute.

But you're right, these are optional.


----------



## Dane man (Oct 20, 2014)

Kirjava said:


> Even after considering all this, will the results be anything more than meaningless? I've always considered something like this too deep and subjective to analyse effectively, but maybe you can convince me otherwise.



Well, the hope is that if a somewhat accurate measure of efficiency can be established, then cubers can look at methods and substeps from a "bang for your buck" perspective. I, for example, know that I want to learn an efficient method, but I'm also not going to learn a method that costs a ton of time and effort for very little benefit relative to other methods. For example, my 3Look BLL has an efficiency score of 0.51282, compared to 0.33058 for the 2LLL OLL/PLL. While OLL/PLL is faster (by 5.5 moves on average) and has one less recog step, it also requires 54 more algorithms. That many more algorithms for that small a benefit makes it clearly less efficient, even though it is much faster. For this reason I have stuck to using 3Look BLL: it requires less effort despite not being as fast.

In simpler terms, it's more a numerical way to roughly quantify how much relative benefit you will get out of the amount of effort you're willing to put in (hence "efficiency"). If you're willing to memorize all the algs in ZB for the benefit of cutting 15 moves out of your solve, then go right ahead. The efficiency will be significantly smaller than that of CFOP, Roux, or ZZ, but it will be faster. If you're not willing to put in that much effort, you might want to look for a method that has a greater efficiency.

While it isn't super useful or even all that necessary, it's still something interesting to think about.


----------



## martinss (Oct 20, 2014)

Dane man said:


> 2.5(Orientations) + 2.5(Permutations)
> --------------------------------------------------------
> Algorithms + 2(AvgMoves) + (ReplaceAlgs/IntuitiveCubies)
> 
> ...



I was thinking about something like this for the 3x3x3:

Orientations / 40 + Permutations / 40
------------------------------------------------------------------------------------------------------------
Algorithms / 100 + AvgMoves / 40 + AvgIntuitiveMoves / ( IntuitiveOrientations + IntuitivePermutations )

Numerator is between 0 and 1
Denominator is between 1 and +infinity



For any other cubes :

( Orientations / OrientableCubies + Permutations / PermutableCubies ) / 2
----------------------------------------------------------------------------------------------------------------------------------------------------
Algorithms / 100 + AvgMoves / (2 x God'sNumber) + AvgIntuitiveMoves / (IntuitiveOrientations + IntuitivePermutations) x (Cubies/God'sNumber)


EDIT :
As Numerator is between 0 and 1 and Denominator is between 1 and +infinity, the result is always between 0 and 1. (full method)
For the very best method : (20/40 + 20/40) / ( 0/100 + 20/40 + 20/(20+20) ) = 1 / 1 = 1
For CFOP : (20/40 + 20/40) / ( 78/100 + 54.8/40 + 0/(0+0) ) = 0.31746031746 ( 0/0 = 1 !?! )
For Cross : (4/40 + 4/40) / ( 0/100 + 6.5/40 + (6.5 / (4+4) ) ) = 0.2 / (0+0.1625+0.8125) = 0.2/0.975 = 0.20512820512
For F2L intuitive : (8/40 + 8/40)/(0/100 + 26.8/40 + (26.8/ (8+8) ) ) = 0.17057569296
For OLL : (8/40 + 0/40)/(57/100+9.7/40+0/(0+0)) = 0.11034482758
For PLL : (8/40)/(21/100 +11.8/40+1) = 0.13289036544
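For comparison, the 3x3x3 formula above can be sketched the same way in Python (a sketch under my reading of it; the zero-division convention in the code follows the "0/0 = 1 !?!" note):

```python
def efficiency_3x3(orientations, permutations, algorithms, avg_moves,
                   avg_intuitive_moves=0, int_oriented=0, int_permuted=0):
    # martinss's 3x3x3 variant; parameter names mirror his variables.
    numerator = orientations / 40 + permutations / 40
    if int_oriented + int_permuted:
        intuitive = avg_intuitive_moves / (int_oriented + int_permuted)
    else:
        intuitive = 1  # the "0/0 = 1 !?!" convention from the post
    return numerator / (algorithms / 100 + avg_moves / 40 + intuitive)

print(round(efficiency_3x3(20, 20, 78, 54.8), 5))         # CFOP: 0.31746
print(round(efficiency_3x3(4, 4, 0, 6.5, 6.5, 4, 4), 5))  # Cross: 0.20513
```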


----------



## Dane man (Oct 20, 2014)

martinss said:


> I think about something like that for the 3x3x3:
> 
> Orientations / 40 + Permutations / 40
> ------------------------------------------------------------------------------------------------------------
> ...



That's an interesting way to look at it. Mine is also for the 3x3. Though I see one issue: the formulas you have presented are limited to measuring the effectiveness (or total benefit) of an entire method, rather than the efficiency of a substep alone. For example, as you have shown, God's Algorithm is given a near-perfect score according to your formula (the average move count for God's Algorithm is between 17-18), whereas my formula reduces the score due to the amount of mental effort / work required.

*God's Algorithm (memorized)*
100/(43*10^18 + 17 + 0) = Approx. 0.0000000000000000023256
*God's Algorithm (Intuitively)*
100/(0+17+(43*10^18/20)) = Approx. 0.000000000000000046512

As for your CFOP calculation, you've forgotten to include the intuitive moves (Cross, F2L). But nice work.

Also, are you the same Martinss that is adding the cube states map to the wiki?


----------



## Tao Yu (Oct 20, 2014)

Dane man said:


> Well, the hope is that if a somewhat accurate measure of efficiency can be established,* then cubers can look at methods and substeps from a "bang for your buck" perspective*. I, for example, know that I want to learn an efficient method, but I also am not going to try to learn a method that costs a ton of time and effort for very little benefit relative to other methods. So for example, my 3Look BLL has an efficiency score of 0.51282 compared to the 2LLL OLL/PLL = 0.33058. While the OLL and PLL is faster (by 5.5 moves on average) and has one less recog step, it also requires 54 more algorithms. So that many more algorithms for that small a benefit makes it clearly less efficient, even though it is much faster. For this reason I have stuck to using 3Look BLL because it requires less effort despite not being as fast.
> 
> In simpler terms, it's more a numerical way to roughly quantify how much relative benefit you will get out of the amount of effort you're willing to put in (hence "efficiency"). If you're willing to memorize all the algs in ZB for the benefit of cutting 15 moves out of your solve, then go right ahead. The efficiency will be significantly smaller than that of CFOP, Roux, or ZZ, but it will be faster. If you're not willing to put in that much effort, you might want to look for a method that has a greater efficiency.
> 
> While it isn't super useful or even all that necessary, it's still something interesting to think about.



Let me just say that I don't think many people are going to take "efficiency" into account when choosing a _speedsolving_ method. 

As far as speedsolving is concerned, I think what people are most interested in is a method that:
1. Doesn't hold you back or put you at a disadvantage to someone using a different method, and 
2. Is feasible to learn. 

If you use BLL, you'll be at a disadvantage, no matter what. Speedsolvers aren't going to see that as bang for your buck no matter how "efficient" it is.


----------



## GuRoux (Oct 20, 2014)

Dane man said:


> My estimate is based on the following. I divide the cross into four parts, each piece being one. Then I estimated the number of algorithms that would be required to put one piece in place. I came up with 5 ((U move) R, (U move) R', (U move) R2, R (U move) F, R (U move) R'), then added one because the U move could be U/U' or U2. Because the cross requires so little mental effort, and can be taught using those algorithms to a small child, I feel the number I came up with is accurate.
> 
> If you have another method of estimation, I would love to see it.



I thought you have to base your estimation on what can bring you to a solution close to the average. For cross, I thought your algs should allow you to finish with an average of 6.5 moves. What about algs like F D' R' and all its variations, where the cross piece is misoriented in the cross layer?


----------



## Dane man (Oct 21, 2014)

GuRoux said:


> I thought you have to base your estimation on what can bring you to a solution close to the average. For cross, I thought your algs should allow you to finish with an average of 6.5 moves. What about algs like F D' R' and all its variations, where the cross piece is misoriented in the cross layer?



The idea wasn't to measure the algorithms that would replace it while maintaining the same number of moves, but simply to measure the difficulty. The difficulty of something can be measured by the minimum number of algs that need to be used to replace the intuitive act. Hence the reason that 6 is more appropriate than say 30. Doing the cross is immensely easy for this reason.

As for F' D' R', I did include that alg; I just solve my cross on top (in my head at least; it ends up on the bottom, but I think of it as the top).



Tao Yu said:


> Let me just say that I don't think many people are going to take "efficiency" into account when choosing a _speedsolving_ method....Speedsolvers aren't going to see that as bang for your buck no matter how "efficient" it is.



Well, as I said at the beginning of the original post, this isn't for evaluating the value of a method, how good it is, or anything like that. It's simply a measurement of the efficiency of the method: how much you get out of your efforts. And as you've said, 3Look BLL isn't as fast as OLL/PLL despite being more efficient. The difference is that OLL/PLL is 5.5 moves faster, even though it requires 52 more algorithms. It is less _efficient_, but because most other cubers are using OLL/PLL, speedsolvers use it too. They are willing to do the work needed to learn it and get good at it so that they can compete. 

The reason I don't do OLL/PLL is for the same reason that the OLL/PLL people don't do 2Look BLL. 2Look BLL is 2 moves faster than OLL/PLL, but it has 98 algorithms (22 algs more). Why don't people use 2Look BLL? Because even though it's 2 moves faster, it's slightly less efficient than OLL/PLL. They don't get as much bang for their buck. The same thing is true of 3Look BLL. I use it because OLL/PLL is less efficient, despite being 5.5 moves faster. I'm slightly lazier than most cubers.

Why have so few learned the ZB method? Same thing. Over 200 more algs for a 15 move difference. It's less efficient despite being significantly faster, and giving a massive advantage.

So you're correct in one sense, but I must point out why. Cubers _do_ care about efficiency, but they _don't_ care about it _as much_ as they care about being on at least equal footing with competing cubers.


----------



## martinss (Oct 21, 2014)

Dane man said:


> That's an interesting way to look at it. Mine is also for the 3x3.


The second formula was just to make a formula for any cube; the 3x3x3 is just a specific case. 



Dane man said:


> Though I see one issue, and that is that the formulas you have presented are limited to measuring the effectiveness (or total benefit) relative to the entire method, rather than the efficiency of a substep alone.


That's not true. I was only talking about full methods because the denominator is between 1 and infinity, so the result for a full method is always between 0 and 1 (but it doesn't really work)...



Dane man said:


> For example, as you have shown, God's Algorithm is given a near-perfect (average moves for God's Algorithm is between 17-18) score according to your formula, whereas my formula reduces the score due to the amount of mental effort / work required.
> 
> *God's Algorithm (memorized)*
> 100/(43*10^18 + 17 + 0) = Approx. 0.0000000000000000023256
> ...



Well, God's Algorithm (memorized) should have a very low score due to mental effort. But for God's Algorithm (intuitively), the score should be 1. Doesn't intuitively just mean logical (so no more mental effort than cross or F2L)? Your "ReplaceAlgs" is too hard to find!



Dane man said:


> As for your CFOP calculation, you've forgotten to include the intuitive moves (Cross, F2L). But nice work.


That's true!



Dane man said:


> Also, are you the same Martinss that is adding the cube states map to the wiki?


I'm trying to add each "cube state" to the wiki for more interactivity, exploration... I did all of them for CFOP. But it takes time, which I don't have much of.



So what about 

( Orientations / OrientableCubies + Permutations / PermutableCubies ) / 2
----------------------------------------------------------------------------------------------------------------------------------------------------
sqrt(Algorithms) / 10 + AvgMoves / (2 x BestAvgMoves) + AvgIntuitiveMoves / (IntuitiveOrientations + IntuitivePermutations) x (Cubies/BestAvgMoves)

?

(with AvgIntuitiveMoves / (IntuitiveOrientations + IntuitivePermutations) x (Cubies/BestAvgMoves) = 0.5 if nothing is intuitive)

For the 3x3x3, it gives that :

Orientations / 40 + Permutations / 40
------------------------------------------------------------------------------------------------------------
sqrt(Algorithms) / 10 + AvgMoves / 35.4 + AvgIntuitiveMoves / [0.885 ( IntuitiveOrientations + IntuitivePermutations )]



EDIT :

I added sqrt(Algorithms) because going from 1 to 2 algorithms is harder than going from 20 to 21.

Numerator is from 0 to 1. (0 for a nothing step, 1 for a full method)
Denominator is from 1 to +infinity for any full method, or less for a step.
Result is between 0 and 1 (at least for any full method) or more for a very good step.

God's Algorithm (memorized) :
1 / ( (sqrt(43*10^18))/10 + 17.7/35.4 + 0.5 ) = 1.5249857e-9

God's Algorithm (intuitively) :
1 / ( (sqrt(0))/10 + 17.7/35.4 + (17.7/(0.885*40)) ) = 1

CFOP :
1 / ( (sqrt(78))/10 + 54.8/35.4 + (33.3/(0.885*24)) ) = 0.25006280979

CROSS :
(4/40+4/40) / ( 0 + 6.5/35.4 + (2.5/(0.885*8)) ) = 0.37263157894

F2L :
(8/40+8/40) / ( 0 + 26.8/35.4 + (26.8/(0.885*16)) ) = 0.15095948827

OLL : 
(8/40 ) / ( sqrt(57)/10 + 9.7/35.4 +0.5 ) = 0.13080489708

PLL :
(8/40) / ( sqrt(21)/10 +11.8/35.4 +0.5) = 0.15484779241


----------



## Lucas Garron (Oct 21, 2014)

Man, you're making CLS look bad. 

ELS: 2.5*6/(21 + 2*6.5) = 0.44
CLS: 2.5*6/(104 + 2*9.5) = 0.12
(Using stats from http://cube.garron.us/MGLS/)

Of course, that's counting mirrors as different, and using near-optimal movecount for CLS.
This is misrepresentative, because you want to use <K(=R U R'), U>-gen algs, which are both easier to memorize, and consistently easy to learn and execute.

I agree with Kirjava. Anything is going to be inexact here, but this sounds more sensible than nothing.
Have you tried proving basic properties of the formula? For example, under certain conditions it should always be sensible to split a large step into smaller ones.

Anyone want to calculate efficiencies for Thistlethwaite or Kociemba?
Kociemba is tuned for a different kind of cost ("# of algs" doesn't make sense, but the movecount for each case is important.)



Stefan said:


> I haven't read it all, but your Turn-Check-If-Solved method just gave me an idea for a simple method. Let A1 be an 8-cycle of corners, A2 be a 7-cycle of the corners except DFR, etc. I think you'll get the idea.



Congratulations, you have invented strong generating sets!
(These kinds of methods make it easy to prove lower bounds on the number of states of a puzzle after combinatorics + parity gives you an upper bound.)


----------



## Petro Leum (Oct 21, 2014)

I had similar ideas, but I was too lazy to write out something like this. Great job!

How do you measure recognition complexity within efficiency? I think that's a very important factor.


----------



## Dane man (Oct 21, 2014)

Lucas Garron said:


> Man, you're making CLS look bad.
> 
> ELS: 2.5*6/(21 + 2*6.5) = 0.44
> CLS: 2.5*6/(104 + 2*9.5) = 0.12
> ...



The algorithms may be easier to memorize, but there is also the recognition of the 104 states and the recall needed for those states. This is the only numerical way I have found to measure recognition requirements and include them in the formula's efficiency measurement, so in all fairness, CLS is correctly calculated (at least in the denominator), because it requires not just memorization of the algs, but recognition and recollection of 104 different states.

Now this doesn't mean that it's bad, it just means it requires more mental effort than usual to implement.

_Note: You've put 6 instead of 5 for the number of orientations accomplished in the numerators of your calculations. Is that a mistake?_



Lucas Garron said:


> Have you tried proving basic properties of the formula? For example, under certain conditions it should always be sensible to split a large step into smaller ones.



What do you mean by that?



Lucas Garron said:


> Anyone want to calculate efficiencies for Thistlethwaite or Kociemba?
> Kociemba is tuned for a different kind of cost ("# of algs" doesn't make sense, but the movecount for each case is important.)



Thistlethwaite and Kociemba would be very difficult to calculate. The number of moves won't affect their scores so much as the estimated replacement algs, which would be high. Both Thistlethwaite and Kociemba would naturally receive rather low efficiency scores because of the difficulty of human application.



Petro Leum said:


> How do you measure recognition complexity within efficiency? i think that's a very important factor.



Unfortunately, I have not found any numerical way to represent the difficulty of recognition other than the number of algorithms required, which is representative of the number of states that will need to be recognized, and in some sense, the difficulty of that recognition. If there were a numerical way to measure the difficulty of recognition, I would gladly include it.


----------



## Dane man (Oct 21, 2014)

martinss said:


> Well, God's Algorithm (memorized) should have a very low score due to mental effort. But for God's Algorithm (intuitively), the score should be 1. Doesn't intuitively just mean logical (so no more mental effort than cross or F2L)? Your "ReplaceAlgs" is too hard to find!


When it comes to talking about speedcubing and speedcubing methods, the word "intuitive" usually refers to a step that is thought through instead of executed with memorized algorithms. The difficulty of thinking through God's algorithm is only slightly less than the difficulty of memorizing 43 quintillion algorithms, hence the still very low score.

And yes, replacement algs is a very inaccurate and difficult to measure number, especially when there is a method that uses intuitive steps without there being any method of generating algorithms to replace them. Two examples have been given by Lucas Garron: Kociemba and Thistlethwaite.

As for your new formula, the new results are starting to look much more accurate (super subjective term) when it comes to individual substep measurement. Good job!


----------



## martinss (Oct 21, 2014)

*(learning) Difficulty of an algorithm* = Number of moves in [WIKI]Logical Turn Metric[/WIKI] ( with [A:B] = ABA', length=2 and with [A,B] = ABA'B', length=2 )
Example : For the main PLL-T algorithm, difficulty=9 LTM moves : R U R' U' R' F R2 U' R' U' R U R' F' = [R, U] [F: [F', R'] [R U' R': (U')] ] 
Divide by 2 if it's a mirror/inverse of another algorithm (of the same substep).

*Slowness of an algorithm* = ( Number of moves in [WIKI]HTM[/WIKI] ) * ( Number of regrips ) * ( Number of looks )
Example : For the main PLL-T algorithm, Slowness = 14 HTM moves . regrips . looks : 14*1*1=14

*Efficacity of an algorithm* = 1 / ( Slowness * difficulty)
Examples: For the main PLL-T algorithm, Efficacity= 1/126

*Mobility of a substep * = ( Number of free permutations ) + ( Number of free orientations ) 
Examples : For the PLL algorithms, Mobility = 0 change. For the OLL algorithms , Mobility = 8 changes. (8 permutations)

*Progression of a substep* = ( Permutations/permutable cubies + Orientations/orientable cubies ) / 2
Example : For the PLL algorithms, progression = (8/20+0/20)/2 = 8/40 = 1/5

*Efficacity of a substep* = ( progression/mobility ) * Σ P(alg)*Efficacity(alg)
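As a quick sanity check of these definitions, here is a minimal Python sketch for the PLL-T example. Only the HTM length is computed from the move string; the LTM difficulty (9), the regrip count, and the look count are taken from the example above rather than computed:

```python
# Per-algorithm metrics for the main PLL-T alg, following the
# definitions above. Only the HTM length is derived from the string.
t_perm = "R U R' U' R' F R2 U' R' U' R U R' F'"

htm = len(t_perm.split())         # 14 moves in HTM
regrips, looks = 1, 1             # taken from the example
slowness = htm * regrips * looks  # 14
difficulty = 9                    # LTM count stated in the example
efficacity = 1 / (slowness * difficulty)
print(efficacity)  # 1/126 ≈ 0.00794
```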


----------



## Dane man (Oct 22, 2014)

martinss said:


> *(learning) Difficulty of an algorithm* = Number of moves in [WIKI]Logical Turn Metric[/WIKI] ( with [A:B] = ABA', length=2 and with [A,B] = ABA'B', length=2 )
> Example : For the main PLL-T algorithm, difficulty=9 LTM moves : R U R' U' R' F R2 U' R' U' R U R' F' = [R, U] [F: [F', R'] [R U' R': (U')] ]
> Divide by 2 if it's a mirror/invert of an other algorithm (of the same substep).
> 
> ...



Wow, I like these. They all follow a very logical progression, and are actually quite accurate when it comes to measuring individual parts. Good work!

I have one question though. What is P(alg)? And is it _( progression/mobility ) * Σ P(alg) * Efficacity(alg)_ or _( progression/mobility ) * Σ ( P(alg) * Efficacity(alg) )_? The lack of space around the second multiplier makes me wonder.


----------



## martinss (Oct 22, 2014)

Dane man said:


> What is P(alg)? and is it _( progression/mobility ) * Σ P(alg) * Efficacity(alg)_ or _( progression/mobility ) * Σ ( P(alg) * Efficacity(alg) )_? The lack of space around the second multiplier makes me wonder.


P(alg) is the probability of the case given the substep. So for PLL efficacity:
( progression/mobility ) * ( 1/18 * efficacity(PLLAa) + 1/18 * efficacity(PLLAb) + 1/72 * efficacity(PLLH) + ... )
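Read as a probability-weighted sum, the PLL expression above can be sketched like this. Only the case probabilities 1/18, 1/18, 1/72 come from the post; the per-case efficacity values are made-up placeholders:

```python
# Σ P(alg) * Efficacity(alg) over PLL cases. Probabilities are from the post;
# the efficacity values are placeholders, not measured ones.
cases = {
    "PLLAa": (1/18, 1/126),
    "PLLAb": (1/18, 1/126),
    "PLLH":  (1/72, 1/200),
    # ... the remaining 18 of the 21 PLL cases would follow
}
weighted = sum(p * eff for p, eff in cases.values())
```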


----------



## Dane man (Oct 22, 2014)

martinss said:


> P(alg) is the probability of the case given the substep. So for PLL efficacity:
> ( progression/mobility ) * ( 1/18 * efficacity(PLLAa) + 1/18 * efficacity(PLLAb) + 1/72 * efficacity(PLLH) + ... )



Oh okay, so it's _( progression/mobility ) * Σ ( Probability(alg) * Efficacity(alg) )_. Nicely done.


----------



## mark49152 (Oct 22, 2014)

Which algs do you choose for calculating the efficacy of, say, PLL? Different people will have different values so it doesn't seem right to say it's a measure of the substep. It's more a measure of the alg set.


----------



## martinss (Oct 23, 2014)

Dane man said:


> Oh okay, so it's _( progression/mobility ) * Σ ( Probability(alg) * Efficacity(alg) )_. Nicely done.



I didn't think about it, but it does not work. Two algs should be less efficient than one... I'll try to think of another way...



mark49152 said:


> Which algs do you choose for calculating the efficacy of, say, PLL? Different people will have different values so it doesn't seem right to say it's a measure of the substep. It's more a measure of the alg set.



I think the goal of the thread is to propose these algs... You're right that measuring a substep doesn't mean measuring the alg set, because sometimes there are intuitive moves (as in F2L), sometimes other things...


----------



## martinss (Oct 23, 2014)

martinss said:


> I didn't think about it, but it does not work. Two algs should be less efficient than one... I'll try to think of another way...



*(learning) Difficulty of an algorithm* = Number of moves in [WIKI]Logical Turn Metric[/WIKI] ( with [A:B] = ABA', length=2 and with [A,B] = ABA'B', length=2 )
Example : For the main PLL-T algorithm, difficulty=9 LTM moves : R U R' U' R' F R2 U' R' U' R U R' F' = [R, U] [F: [F', R'] [R U' R': (U')] ] 
_Difficulty belongs to [1;+infinity[_

*Slowness of an algorithm* = ( Number of moves in [WIKI]HTM[/WIKI] ) * ( Number of regrips )
Example : For the main PLL-T algorithm, Slowness=14 HTM moves . regrip : 14*1=14
_Slowness belongs to [1;+infinity[_

*Efficacity of an algorithm* = 1 / ( Slowness * Difficulty)
Examples: For the main PLL-T algorithm, Efficacity= 1/126
_Efficacity belongs to ]0;1]_


*(learning) Difficulty of algs of a step* = Σ Difficulty(Alg)
_Difficulty belongs to [1;+infinity[_

*Slowness of algs of a step* = Σ (Probability(alg, knowing the substep)*Slowness(alg))
_Slowness belongs to [1;+infinity[_

*Efficacity of algs of a step* = 1 / (slowness*difficulty)
_Efficacity belongs to ]0;1]_



*Mobility of a substep* = ( Number of free permutations ) + ( Number of free orientations ) 
Examples : For the PLL algorithms, Mobility = 0 changes. For the OLL algorithms, Mobility = 8 changes (8 free permutations).
_Mobility belongs to [0;40]_

*Progression of a substep* = ( Permutations/permutable cubies + Orientations/orientable cubies ) / 2
Example : For the PLL algorithms, progression = (8/20 + 0/20)/2 = 8/40 = 1/5
_Progression belongs to ]0;1]_

*Efficacity of a substep* = ( progression/(1+mobility) ) * (Efficacity of algs of the substep)
_Efficacity belongs to ]0;1]_
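Putting the revised definitions together, a toy end-to-end computation might look like this. The single-alg scenario and the function names are mine; the numbers (slowness 14, difficulty 9, progression 1/5, mobility 0) are the PLL-T and PLL figures from the examples above:

```python
def substep_efficacity(cases, progression, mobility):
    # cases: list of (probability, slowness, difficulty) per algorithm
    difficulty = sum(d for _, _, d in cases)        # Σ Difficulty(alg)
    slowness = sum(p * s for p, s, _ in cases)      # Σ P(alg) * Slowness(alg)
    algs_eff = 1 / (slowness * difficulty)          # Efficacity of algs of the step
    return progression / (1 + mobility) * algs_eff  # Efficacity of the substep

# Toy "substep" containing only the main PLL-T alg (P = 1):
result = substep_efficacity([(1.0, 14, 9)], progression=1/5, mobility=0)
# result = (1/5) * (1/126)
```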


----------



## brododragon (Feb 1, 2020)

What about something like ZZLL, where you do not completely permute the pieces, but instead make sure they go into a certain group of spots?


----------

