# What is the minimum number of algorithms needed for 2 look last layer?



## 10461394944000 (May 12, 2014)

what is the minimum number of algorithms needed such that for any last layer case, you can apply 2 algorithms and solve the cube?

it's probably some really silly thing that would be silly to recognize but it seems like an interesting question.

what about 3 look?

I can't really think of a way of finding such a method, so I'll let the puzzle theory people find it instead


----------



## Robert-Y (May 13, 2014)

I'm going to make a guess:
Step 1: OCLL+EOLL or OCLL+flip all edges incorrectly
Step 2: PLL or PLL with 4 edges incorrectly flipped

Why? Well I thought about OLL and PLL. I think if we try to halve OLL and double PLL, we will have (slightly) fewer cases to learn...


----------



## Kirjava (May 13, 2014)

Robert-Y said:


> I'm going to make a guess



Didn't we work out this exact thing when we were doing that SuneOLL/OLLCP hax stuff?


----------



## Robert-Y (May 13, 2014)

I don't remember sorry :/

I think my suggestion actually ties with CLL/ELL: 42 + 29 = 71

Step 1: OCLL+EOLL or OCLL+flip all edges incorrectly
Step 2: PLL or PLL with 4 edges incorrectly flipped

28 + 43 = 71

However, I'm not completely certain about the number of algs required for step 1.


----------



## Ollie (May 13, 2014)

oh, I thought Ben was being sarcastic, I didn't get the question initially


----------



## Lucas Garron (May 13, 2014)

10461394944000 said:


> what is the minimum number of algorithms needed such that for any last layer case, you can apply 2 algorithms and solve the cube?



Small but important detail: look ≠ alg.
Any BLD solve is 1-look.

So, with 2 looks, the literal answer is of course still 1 or 2 algs.

I'm just going to use "step" here. For advanced methods, a step usually has 1 look and one "alg". That would also be the case for the straightforward group-theory interpretation of this problem.



10461394944000 said:


> it's probably some really silly thing that would be silly to recognize but it seems like an interesting question.
> 
> what about 3 look?
> 
> I can't really think of a way of finding such a method so i'll let the puzzle theory people find it instead



It *is* an interesting question. We have several "common" ways of splitting up LL into two successive sets of cases (steps).

AUF + OLL + AUF + PLL gives us roughly 58*4*22*4 = 20416 sequences. There are 15552 LL cases (ignoring AUF) so we're overshooting a bit.
CLL/ELL gives us 4*43*4*30 = 20640, which is also a bit much.
Robert's suggestion gives us 4*29*4*43 = 19952, which is not much better.

The theoretical minimum is around \( \sqrt{15552}/4 \approx 31.18 \) cases per step.
So, it might be possible to find a set of 31 + 32 cases that can solve LL using a 2-step system. This gives us 4*31*4*32 = 15872 > 15552 possible cases, requiring as few as 63 cases to memorize.

You could also use some tricks to reuse the same "alg" for multiple cases, which is arguably in the spirit of the original question.


It's possible that up to one case from each set could be the "solved" case, as with OLL/PLL.
In general, it might be possible to share a few cases between some sets. (This usually doesn't happen for speedsolving methods, but it might be possible for this.)
If we count mirrors as one alg, that always helps. (I think most people would exclude inverses, though.)
It might be possible to use the same alg for multiple cases, using rotations. For example, (R U R' U R U2 R')(L' U' L U' L' U2 L) can be used sideways as z'(R U R' U R U2 R')(L' U' L U' L' U2 L)z. Because the algs basically need to be restricted to LL (else we can't use an AUF between two of them), I can't think of any other examples, though.

We could also allow fancier conjugates, but then we start to blur the definition of an alg (if you know the two parts of Y-perm for OLL, does the Y count as a separate alg?). In general, we could reuse algs in other algs. However, this allows us to solve many cases using 1 look, by applying a BLD method. To keep the question interesting, I think that algs used to construct other algs should not allow us to reduce the count of algs: if you have to remember a variant/conjugation/combination of algs, that counts as a new alg.

So, the theoretical minimum for an "obvious" way to do this is 63 cases, with possible hacks to reduce the amount of algs you need to memorize. I'm willing to bet there's something between 63 and 71, although it's not likely to be pretty.

The analogous answer for 3 steps is \( \sqrt[3]{15552}/4 \approx 6.24 \) cases per step, which allows us to get away with as few as 19 cases in theory (e.g. 6 + 6 + 7). For 4 steps, 12 cases. That sounds like a fun puzzle.
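The counts above are easy to check with a short script. This is just a sketch of the arithmetic in this post; 15552 is the standard count of last-layer states ignoring AUF:

```python
TOTAL = 15552  # last-layer states, ignoring AUF

# Sequences covered by a two-step split with a and b cases per step,
# allowing an AUF (4 choices) before each step, no skips.
def sequences(a, b):
    return 4 * a * 4 * b

# The splits discussed above (counts include the solved case):
print(sequences(58, 22))  # OLL/PLL:  20416
print(sequences(43, 30))  # CLL/ELL:  20640
print(sequences(29, 43))  # Robert's: 19952

# Smallest a + b with sequences(a, b) >= TOTAL, by brute force:
best_total = min(a + b
                 for a in range(1, 200)
                 for b in range(1, 200)
                 if sequences(a, b) >= TOTAL)
print(best_total)  # 63 (e.g. 31 + 32, since 124 * 128 = 15872 >= 15552)
```

Note that a balanced 31 + 31 split only covers 15376 < 15552 sequences, which is why the minimum total without skips is 63 rather than 62.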


----------



## GG (May 13, 2014)

Didn't read that giant thing above _but_ I can contribute that CFCE uses fewer algs than regular CFOP for the last layer.
Sorry if I'm a nuisance!! xD


----------



## Lucas Garron (May 13, 2014)

Lucas Garron said:


> So, the theoretical minimum for an "obvious" way to do this is 63 cases, with possible hacks to reduce the amount of algs you need to memorize. I'm willing to bet there's something between 63 and 71, although it's not likely to be pretty.



I purposely skimmed over "skips" in the previous post.
If you allow skips, then it makes sense to take out the solved cases. Given two sets of algs – call them A and B – the cases you can solve are


(AUF + alg from A) or nothing, followed by
(AUF + alg from B) or nothing

This gives us \( (4|A| + 1)(4|B| + 1) \).

Now, \( (4\cdot 31 + 1)(4\cdot 31 + 1) = 15625 \)
So, if we insist that a skip of either step is for free, we can go down to a (theoretical) minimum of 31+31 = 62 cases that need algs.
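Brute-forcing the skip-inclusive formula confirms the 62. A sketch, using the \( (4|A| + 1)(4|B| + 1) \) count from above:

```python
TOTAL = 15552  # last-layer states, ignoring AUF

# With skips free, sets A and B cover (4|A| + 1)(4|B| + 1) sequences:
# (AUF + alg from the set) or nothing, for each of the two steps.
def covered(a, b):
    return (4 * a + 1) * (4 * b + 1)

best_total = min(a + b
                 for a in range(1, 100)
                 for b in range(1, 100)
                 if covered(a, b) >= TOTAL)
print(best_total)  # 62: |A| = |B| = 31 gives 125 * 125 = 15625 >= 15552
```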


----------



## Athefre (May 13, 2014)

OCLL + Separate edges (correct or opposite swapped):

Sune: 6 cases (Swap F+L, F+R, R+B, B+L, Rotate all 90 degrees, and Correct)
AntiSune: 6 cases
TripleSune: 6 cases
T: 6 cases
Headlights: 6 cases
Pi: 6 cases
H: 4 cases (Swap F+L, F+R, Rotate all 90 degrees, Correct)

CPLL + EOLL + Final Separation:

Adjacent Corner Swap: 16 cases (F+L Misoriented, F+R, R+B, B+L, F+B, L+R, All, None, and multiply all of these by two to account for the state of separation)
Opposite Corner Swap: 10 cases (F+L Misoriented, F+R, F+B, All, None, Multiply by two)

That's 66. This is pretty much NMLL for CFOP. What did I miss? Edit: Duh, when CPLL is complete.


----------



## Carrot (May 13, 2014)

just a thought: U' is solved by exactly what 2 algs? the question didn't mention that you were allowed to insert random moves. 
(yes, he probably intended us to be able to insert U turns randomly between algs, but he didn't state it.)


----------



## 10461394944000 (May 13, 2014)

Carrot said:


> just a thought: U' is solved by exactly what 2 algs? the question didn't mention that you were allowed to insert random moves.
> (yes, he probably intended us to be able to insert U turns randomly between algs, but he didn't state it.)



yes AUF is fine


----------



## cmhardw (May 13, 2014)

10461394944000 said:


> yes AUF is fine



Would you allow setup turns?

For example:
M + J perm + M' ?

My thought is that if we're trying to minimize the number of memorized algorithms, then perhaps allowing a setup turn not greater than 1 turn in length before the start of each step could reduce the total number of algorithms?


----------



## Antonie faz fan (May 13, 2014)

28
Edge control (7 OLL algs) and all PLLs


----------



## Filipe Teixeira (May 13, 2014)

Antonie faz fan said:


> 28
> edge control( 7 oll algs) and all pll's



that is not 2 look last layer.


----------



## 10461394944000 (May 13, 2014)

Antonie faz fan said:


> 28
> edge control( 7 oll algs) and all pll's



edge control+ocll+pll is 3 look last slot+last layer, not 2 look last layer



cmhardw said:


> Would you allow setup turns?
> 
> For example:
> M + J perm + M' ?
> ...



I guess it would be interesting to see how much lower the number would be with setup moves, but I was originally thinking no setups.


----------



## Antonie faz fan (May 13, 2014)

filipemtx said:


> that is not 2 look last layer.



why not?

edge control while doing F2L, then you have corners to solve for OLL (first look) and then PLL (second look)


----------



## 10461394944000 (May 13, 2014)

Antonie faz fan said:


> why not?



last slot is not part of the last layer

also edge control+ocll+pll is 3 algs


----------



## uberCuber (May 13, 2014)

Antonie faz fan said:


> 28
> edge control( 7 oll algs) and all pll's



Okay then how about let's just use full OLS, and then we can do 2-look last layer with only the 6 algs required for 2-look PLL!!


----------



## Jakube (May 13, 2014)

uberCuber said:


> Okay then how about let's just use full OLS, and then we can do 2-look last layer with only the 6 algs required for 2-look PLL!!



Don't be stupid.



10461394944000 said:


> last slot is not part of the last layer



Even OLS already has 216 algs, not counting forming the pair.


----------



## cubernya (May 13, 2014)

10461394944000 said:


> last slot is not part of the last layer
> 
> also edge control+ocll+pll is 3 algs



What about partial edge control (sexy or sledge)? That would eliminate all dot OLLs


----------



## 10461394944000 (May 13, 2014)

theZcuber said:


> What about partial edge control (sexy or sledge)? That would eliminate all dot OLLs





10461394944000 said:


> what is the minumum number of algorithms needed such that *for any last layer case*, you can apply 2 algorithms and solve the cube?



no because edge control is not LL


----------



## Hypocrism (May 13, 2014)

You could even 1-look with a beginner's method if you predict where the irrelevant pieces go after each algorithm. So, for a 2-look that is practically doable, you just need 4:

1st look:
F U R U' R' F' + Sune (you predict the Sune case before the F U R U' R' F' algorithm)

2nd look:
A perm + U perm (predict the edge case before the A-perm combination)

And I'm sure that you could do it with fewer in a similar way!

For a speedsolving method, obviously this isn't a practical choice, but it's legitimate if you don't assume 1 case -> 1 algorithm.


----------



## Filipe Teixeira (May 13, 2014)

until now the winner is CLL+ELL with 71 cases


----------



## Tempus (May 13, 2014)

10461394944000 said:


> what is the minumum number of algorithms needed such that for any last layer case, you can apply 2 algorithms and solve the cube?
> 
> it's probably some really silly thing that would be silly to recognize but it seems like an interesting question.
> 
> ...


I think this is a fascinating question. So fascinating, in fact, that I already thought of it before joining this forum. A few months ago, when I started getting into cubing again, I wrote a crude program to try to answer questions such as this using a Monte Carlo algorithm. The problem is that it requires a vast amount of processing time. Even running on eight cores, it takes a long time to get any kind of result, and that result will never be perfect because it's using a Monte Carlo approach. Still, I have hauled the code out of mothballs and am trying to at least establish an upper bound. It will take time, however...


----------



## qqwref (May 14, 2014)

I really like the idea of having some shared cases between the two sets - Lucas suggests maybe a few, but I'm thinking of having only one bigger set of algs, so that we can solve the LL by doing two of them in sequence. It would be like a crazy version of the "two sunes" beginner method for OCLL. Unfortunately there is no obvious way to construct such a set of algs.

Using the same alg for multiple cases is also an interesting idea, although pretty much only 2-corner twists can be used if we restrict ourselves to the last layer. (It does save an OLL, though.) I wonder if there is some kind of other geometry we could get the pieces into on the first step, which would have more symmetries than the LL. I would say that any rotations between steps are allowed, BTW.



Tempus said:


> I think this is a fascinating quesiton. So fascinating, in fact, that I already thought of it before joining this forum. A few months ago, when I started getting into cubing again, I wrote a crude program to try to answer questions such as this using a Monte Carlo algorithm. The problem is that it requires a vast amount of processing time. Even running on eight cores, it takes a long time to get any kind of result, and that result will never be perfect because it's using a Monte Carlo approach. Still, I have hauled the code out of mothballs and am trying to at least establish an upper bound. It will take time, however...


Ooh, cool. Do you remember anything more specific about how your program worked?



Antonie faz fan said:


> edge control while doing f2l the you have corners to solve for oll (first look) and then PLL ( for second look)





theZcuber said:


> What about partial edge control (sexy or sledge)? That would eliminate all dot OLLs





Hypocrism said:


> You could even 1-look with a beginner's method if you predict where the irrelevant pieces go after each algorithm. So, for a 2-look that is practically doable, you just need 4





filipemtx said:


> until now the winner is CLL+ELL with 71 cases


Please! Read the thread!!!


----------



## Tempus (May 14, 2014)

qqwref said:


> Ooh, cool. Do you remember anything more specific about how your program worked?


Given n and m, it uses a Monte Carlo approach to generate a set of n algorithms which is sufficient to solve the last layer in m looks. It is currently processing, and I will post some results when I have them, although I do need to tidy up the output routines to make it produce something presentable. This program was written only for my own use, and as a result its user interface is _very_ rough around the edges.


----------



## cubernya (May 14, 2014)

Just thought of this (technically correct) no-look method for any LL: Devil's Algorithm

Obviously this is not what is wanted, but I just want to throw that out there.


----------



## uberCuber (May 14, 2014)

theZcuber said:


> Just thought of this (technically correct) no-look method for any LL: Devil's Algorithm
> 
> Obviously this is not what is wanted, but I just want to throw that out there.



It's not really no-look though. If you don't look, how can you know when the cube is solved? In fact, it could be up to a 43-quintillion-look method, since the process is:
Perform one move
Look to see if the cube is solved
If it's not solved, repeat.


----------



## IRNjuggle28 (May 14, 2014)

People should have to apply to post in the puzzle theory forum IMO. Spam elsewhere isn't a big deal, but this is getting ridiculous. If people don't know what they're talking about, they should shut up. That's why I virtually never post in here. I don't know much about puzzle theory. But at least I know that I don't know. Come on.

All last layer cases means all last layer cases. Proposing a pre-LL substep that eliminates LL cases is exactly what Ben didn't want. He is specifically asking about LL uninfluenced by F2L. You guys are coming up with ideas that are good for speedcubing. That's not what this is. It's a math question.


----------



## Kirjava (May 14, 2014)

61

Was not optimised for low alg count, can be done.


----------



## kinch2002 (May 14, 2014)

I would have thought that you can get a lot lower than the theoretical 62 that Lucas proposed, by having overlapping alg sets for the 2 steps. I don't believe that anything was specified to say that the first step must reduce the cube to an easily-defined subset of states i.e. we can use an approach like Kirjava's LL stuff. Therefore a new theoretical lower limit would be 31 algs that are used as both the first and second step?


----------



## Kirjava (May 14, 2014)

I think you could brute force a much better answer with my approach with some code and a lot of processing time.


----------



## cmhardw (May 14, 2014)

kinch2002 said:


> I would have thought that you can get a lot lower than the theoretical 62 that Lucas proposed, by having overlapping alg sets for the 2 steps. I don't believe that anything was specified to say that the first step must reduce the cube to an easily-defined subset of states i.e. we can use an approach like Kirjava's LL stuff. Therefore a new theoretical lower limit would be 31 algs that are used as both the first and second step?



I find the argument convincing that an alg could be used in either phase I or phase II of this proposed LL approach. However, how do you jump from 62 down to 31? It's not intuitively obvious to me that all 31 algs could be _dual purpose_ algs like this. Is this a conjecture on your part, or do you have some insight that you can give to help me to see it too?

@Kir - Your results in the post you link really changed my perspective on this topic. I thought Lucas' argument was convincing that 62 was the minimum. The idea of algorithms serving both phase I and phase II functions makes sense as to how to optimize this LL method to have fewer than 62 algorithms.


----------



## irontwig (May 14, 2014)

cmhardw said:


> I find the argument convincing that an alg could be used in either phase I or phase II of this proposed LL approach. However, how do you jump from 62 down to 31? It's not intuitively obvious to me that all 31 algs could be _dual purpose_ algs like this. Is this a conjecture on your part, or do you have some insight that you can give to help me to see it too?



Well, think about how you would solve the reverse of some case...


----------



## kinch2002 (May 14, 2014)

cmhardw said:


> I find the argument convincing that an alg could be used in either phase I or phase II of this proposed LL approach. However, how do you jump from 62 down to 31? It's not intuitively obvious to me that all 31 algs could be _dual purpose_ algs like this. Is this a conjecture on your part, or do you have some insight that you can give to help me to see it too?
> 
> @Kir - Your results in the post you link really changed my perspective on this topic. I thought Lucas' argument was convincing that 62 was the minimum. The idea of algorithms serving both phase I and phase II functions makes sense as to how to optimize this LL method to have fewer than 62 algorithms.



I meant that 31 is a lower bound for the answer, not that 31 is definitely possible. You'd need at least 31 algs otherwise you can't cover all LL cases - using Lucas' calculation of (4*31 + 1)(4*31 + 1) = 15625.

I'm now having second thoughts on the accuracy of this - need to think about the whole AUF/rotation thing for a bit longer. Also, the use of skips in the calculation, when your sets of algorithms overlap

EDIT: I now think 32 is a lower bound: assuming we now have one pool of n algorithms, we can only reach 4n states with 1 algorithm and 4 states with 0 algorithms, so we require (4n)(4n) + 4n + 4 >= 15552, i.e. 16n^2 + 4n - 15548 >= 0. This gives n >= 31.05, thus 32 algorithms are required.


----------



## Tempus (May 14, 2014)

One pass of my program takes several hours to complete. It was able to arrive at a solution for n=67 in one pass. It's done about 4 passes now at n=66 and the best it's managed so far is to cover 62200 of the 62208 cases. As you can guess from that number, it does not take any symmetries into account, however it does make allowances for AUF before, between, and after algorithms. It considers neither mirrors nor inverses to be equivalent, so the 67 algorithms might contain mirrors and/or inverses, except that it's formulated to use F, R, and U moves exclusively, so identifying mirrors at a glance might be tricky. More data to come after further processing...


----------



## cmhardw (May 14, 2014)

irontwig said:


> Well, think about how you would solve the reverse of some case...



Do I understand correctly: the case count reduces by half, since every alg and its inverse solve different cases?

Just to clarify: Are we counting the inverse of an algorithm in our list as a different algorithm?



kinch2002 said:


> I meant that 31 is a lower bound for the answer, not that 31 is definitely possible. You'd need at least 31 algs otherwise you can't cover all LL cases - using Lucas' calculation of (4*31 + 1)(4*31 + 1) = 15625.
> 
> I'm now having second thoughts on the accuracy of this - need to think about the whole AUF/rotation thing for a bit longer. Also, the use of skips in the calculation, when your sets of algorithms overlap
> 
> EDIT: I now think 32 is a lower bound because assuming that we now have one pool of n algorithms, we can only reach 4*n states with 1 algorithm and 4 states with 0 algorithms, so we require (4*n)(4*n)+4*n+4 >= 15552, simplified to 16n^2+4n-15548 = 0. This gives the solution n=31.05, thus 32 algorithms are required.



I misread your first post. It is clear to me now that you meant this (now updated) number to be a lower bound.


----------



## cuBerBruce (May 14, 2014)

kinch2002 said:


> I meant that 31 is a lower bound for the answer, not that 31 is definitely possible. You'd need at least 31 algs otherwise you can't cover all LL cases - using Lucas' calculation of (4*31 + 1)(4*31 + 1) = 15625.
> 
> I'm now having second thoughts on the accuracy of this - need to think about the whole AUF/rotation thing for a bit longer. Also, the use of skips in the calculation, when your sets of algorithms overlap
> 
> EDIT: I now think 32 is a lower bound because assuming that we now have one pool of n algorithms, we can only reach 4*n states with 1 algorithm and 4 states with 0 algorithms, so we require (4*n)(4*n)+4*n+4 >= 15552, simplified to 16n^2+4n-15548 = 0. This gives the solution n=31.05, thus 32 algorithms are required.



I agree with 32 being a lower bound.

Initial state: 1 state
AUF + 1 alg applied: 4*31 = 124 states
AUF + 1st alg + AUF + 2nd alg: 124*4*31 = 15376 states
Max total reached states: 1 + 124 + 15376 = 15501 < 15552
So 31 is not sufficient.
For 32, I get 1 + 128 + 16384 = 16513 > 15552; so it seems 32 could work.


----------



## cmhardw (May 14, 2014)

cuBerBruce said:


> I agree with 32 being a lower bound.
> 
> Initial state: 1 state
> AUF + 1 alg applied: 4*31 = 124 states
> ...



I like this calculation; that's a neat way to verify 32 as a lower bound. Also, this confirms Daniel's calculation. I think a 32-alg lower limit is pretty clear.


----------



## kinch2002 (May 14, 2014)

cuBerBruce said:


> I agree with 32 being a lower bound.
> 
> Initial state: 1 state
> AUF + 1 alg applied: 4*31 = 124 states
> ...



I think you forgot AUF and no algs (3 states). That would give you the same calculation as mine: 4n*4n+4n+4

Of course no difference to the result


----------



## kinch2002 (May 14, 2014)

By the way, using the same logic, 7 is a lower bound for the 3-look version. I wonder whether this is even close to the actual minimum - pretty cool if it is that low!
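The same counting extends to any number of looks with a shared pool. A quick sketch following cuBerBruce's state count (with the k = 0 term playing the role of a full skip):

```python
TOTAL = 15552  # last-layer states, ignoring AUF

# With one shared pool of n algs and at most m applications (an AUF
# before each), the reachable states number sum_{k=0..m} (4n)^k.
# This mirrors cuBerBruce's count for the 2-look case.
def reachable(n, m):
    return sum((4 * n) ** k for k in range(m + 1))

def min_pool(m):
    n = 1
    while reachable(n, m) < TOTAL:
        n += 1
    return n

print(reachable(31, 2))  # 15501 < 15552, so 31 algs cannot suffice
print(min_pool(2))       # 32
print(min_pool(3))       # 7
```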


----------



## XTowncuber (May 14, 2014)

Kirjava said:


> 61
> 
> Was not optimised for low alg count, can be done.





Kirjava said:


> No, you're right. It's actually unsolvable with this system.
> 
> There are around 10 cases like this. I'm thinking of either just generating algs for them for 1LLL when it comes up or doing some other thing (I have a few ideas). The probability of each appearing is 1/~4600, so not a huge deal at the moment.


Did you end up fixing this problem then? I just read through that thread, and I don't remember you saying that you dealt with those bad cases, so just wondering.

Glad you brought that thread up again, it's fun to read and I was too much of a nub to follow it when it was active.


----------



## cuBerBruce (May 14, 2014)

kinch2002 said:


> I think you forgot AUF and no algs (3 states). That would give you the same calculation as mine: 4n*4n+4n+4
> 
> Of course no difference to the result



No, if you count AUF as reaching 3 new states, then you must consider there are 62208 states in all.


----------



## Tempus (May 14, 2014)

cuBerBruce said:


> I agree with 32 being a lower bound.
> 
> Initial state: 1 state
> AUF + 1 alg applied: 4*31 = 124 states
> ...


I see this as analogous to looking at the question of how many 1-inch discs you would need to fully cover a 5-inch disc and deciding that the lower bound is 25, because 25 1-inch discs together have the same surface area as one 5-inch disc. In either case, common sense tells you that it is obviously impossible to satisfy the condition with a number of discs/algorithms anywhere _near_ that low, but rigorously codifying exactly _why_ is rather tricky, as is establishing a more realistic lower bound.

Meanwhile, I'm approaching it from the other end, trying to whittle down the upper bound I've already set at 67.


----------



## Kirjava (May 14, 2014)

XTowncuber said:


> Did you end up fixing this problem then? I just read through that thread, and I don't remember you saying that you dealt with those bad cases, so just wondering.
> 
> Glad you brought that thread up again, it's fun to read and I was too much of a nub to follow it when it was active.



61 is taking into account the extra cases you'd need algs for. It would be 48 without.

I actually forgot that you could just invert the algs that were extra, so that would only require 55 because each has a pair.


----------



## Christopher Mowla (May 14, 2014)

It's also interesting to note that it's possible on the 4x4x4 last layer, at least, that you can have algs which, if you just invert some turns, creates an alternate case. For example, [r2 B2 U' l2 U: l2] and [r2 B2 U l2 U': l2]. (There are several other instances when this has happened).

Thus, it might be possible to create a list of algs (even though they might be longer than optimal) which are closely related as these are.

The result would be, of course, a list which has the fewest number of "base" algorithms (not using setup moves). If such a thing is possible, then the resulting set of such algs would be essentially smaller than the actual calculated/computed minimum number of algorithms.


----------



## kinch2002 (May 15, 2014)

cuBerBruce said:


> No, if you count AUF as reaching 3 new states, then you must consider there are 62208 states in all.


Oops, yes you're right. Thank you.


----------



## Lucas Garron (May 15, 2014)

Yeah, if you want to draw from the same set, 32 sounds like a good lower bound. I'd love to see someone do this for PLL, to see if the numbers are more analogous to 62 or 32. I have no intuition for this.

The search space would almost certainly need to be restricted to PLLs, so this wouldn't be *that* hard.



cmowla said:


> Thus, it might be possible to create a list of algs (even though they might be longer than optimal) which are closely related as these are.



You could almost certainly make algs with small Hamming distances like this:

A (U*) B (U*) C (U*) D

... where A, B, C, D are fixed. This gives us lots of algs with minimal differences. I'm willing to bet that we can set this up so that you can convert a natural indexing scheme to U-directions.

I don't think that's very interesting, though.


----------



## Tempus (May 15, 2014)

Kirjava said:


> 61 is taking into account the x number of cases you'd need extra algs for. It would be 48 without.
> 
> I actually forgot that you could just invert the algs that were extra, so that would only require 55 because each has a pair.


I gather from this that the 48 also assumes that you are counting an algorithm and its inverse together as one algorithm. I, however, am not. By my reckoning, an algorithm is distinct from its inverse, its mirror, and its mirror inverse. Together they would comprise 4 distinct algorithms. By this method of counting, I have now managed to lower the upper bound of n=67 to n=66, and I suspect it can go no lower than that.

Based on the output of my program so far, it is my gut instinct that 66 is the minimum number of algorithms required to solve the LL in 2 looks with AUF.


----------



## Tempus (Jun 9, 2014)

10461394944000 said:


> what is the minumum number of algorithms needed such that for any last layer case, you can apply 2 algorithms and solve the cube?


A few weeks have passed since my last message in this thread. During that time, I have run scenarios almost constantly, but the weather is getting too hot for non-stop processing. It burns about 100 extra watts to run, and that's heat I don't need right now, so I am discontinuing the work for the moment, but I think I've gathered some useful information with which to answer your question, and I've found some surprising results.

Firstly, I'm assuming that AUF is allowed both before and after algorithms. Secondly, I don't know whether you're counting the number of algorithms overall, or allowing an algorithm and its inverse to count as one. Or are you perhaps allowing an algorithm and its mirror to count as one? Maybe you're even counting an algorithm, its mirror, its inverse, _and_ its mirror-inverse all as one.

Since I did not know the answer, I calculated all four, running 24 hours a day. On three separate occasions, I found ways to radically increase the speed of my program, each time making it several times faster than before, so it should now be faster than it started out by about two orders of magnitude.

Here are the results in short form:

If you count each algorithm separately, you can do 2-Look Last Layer (2LLL) using just 63 algorithms. I found this rather surprising, as at one time I thought that 66 was the best that could be done.
If you count an algorithm and its mirror together as one, you can do 2LLL in just 33 algorithms.
If you count an algorithm and its inverse together as one, you can do 2LLL in just 33 algorithms.
If you count an algorithm, its mirror, its inverse, AND its mirror-inverse all as one algorithm, you can do 2LLL in just 19 algorithms.
If anyone is interested in examples of sets of algorithms that display these properties, let me know.




10461394944000 said:


> what about 3 look?


This one has a very interesting result because it seems to defy what's written in the wiki. Here's what the wiki says about 3LLL:


The Speedsolving.com Wiki said:


> The 3 Look Last Layer involves completing the last layer in 3 steps or looks. It usually consists of a 2-Look OLL, followed by a 1-Look PLL which requires knowledge of 31 algorithms in total (including mirrors) with an average of 31 moves. Another method is an LLEF followed by a OCLL-EPP and finally a CPLL which requires 26 algorithms in total (25 if excluding a possible reuse of an algorithm) with an average of 27 moves.


That would seem to indicate that the smallest set of algorithms anyone else has found that is sufficient to do 3-Look Last Layer is a set of 25. My program, however, has found that it is possible to do 3LLL using just ELEVEN.

Here is one such set:

U F2 R' F' R F' U2 F R U2 R' F' (12f)
U2 F U F' U' R' U F' R' F' R F2 U' F' R (15f)
U' F' R2 F' U F2 U F2 U2 F2 U F' R2 F U (15f)
U2 R' F' R U2 F R' F R F2 U2 R' F R U' (15f)
R U2 R U2 R' U F R2 U' R' U R' U R2 U F' (16f)
R U R2 F R2 U R' U' F2 U2 F (11f)
U R U R' F2 U2 F U F' U F2 R U' R' U (15f)
U R U2 R2 F R F' R U2 R2 F' U' F U R (15f)
R U2 R2 F' U' F U R U R U R' F' U' F U' (16f)
F' R' U' R U2 F2 R2 F' U' R U' F' U2 F R2 (15f)
U' F U F' U F' U' F2 R U' R' F2 U F U2 (15f)
I imagine the number would be even smaller if one counted mirrors and/or inverses as being the same.


----------



## Dane man (Jun 9, 2014)

> That would seem to indicate that the smallest set of algorithms anyone else has found that is sufficient to do 3-Look Last Layer is a set of 25. My program, however, has found that it is possible to do 3LLL using just ELEVEN.
> 
> Here is one such set:
> U F2 R' F' R F' U2 F R U2 R' F' (12f)
> ...


So are those based on a set of steps used to achieve the solved state, or are they simply algs chosen so that any combination required to solve the last layer stays within 3 steps?


----------



## Kirjava (Jun 9, 2014)

Tempus said:


> If anyone is interested in examples of sets of algorithms that display these properties, let me know.



Send me everything.


----------



## qqwref (Jun 9, 2014)

Tempus said:


> My program, however, has found that it is possible to do 3LLL using just ELEVEN.
> 
> Here is one such set:
> 
> ...


Very cool. I'd also be interested in having more of this information posted (maybe in a spoiler?).


----------



## MM99 (Jun 9, 2014)

28 or less if you count inverses and stuff with ZZ!


----------



## Tempus (Jun 10, 2014)

Dane man said:


> So are those based on a set of steps used to achieve the solved state, or are they simply algs chosen so that any combination required to solve the last layer stays within 3 steps?


The latter. They are simply a set of algorithms that, bracketed by AUFs as necessary, my computer has determined sufficient to get from any last layer state to any other last layer state in only three hops. Put in terms of graph theory, if you envision the 15,552 possible upper layer states as nodes of a graph, and these eleven algorithms as dictating the edges of that graph, then the resulting graph has a diameter of 3.

I do not personally know how one would apply them intelligently, but the computer would know just by being able to remember every possible scenario at once. Perhaps a set of guidelines or rules could be developed, but I have not tried to do so. I've just thrown processing power and tricky programming at it to try to answer the question of what's possible.




Kirjava said:


> Send me everything.





qqwref said:


> Very cool. I'd also be interested in having more of this information posted (maybe in a spoiler?).


As you wish.


Spoiler: A set of 63 algorithms that are sufficient to do 2LLL




U' R' U2 R U2 F R U R' U' F' R' U R (14f)
U' F U R' U2 R2 U R2 U F' R U R U' R' U (16f)
U2 F U R U' R' U F' R' F U' F' U R (14f)
U R F U' R' U2 R' U' R2 U R2 F' R' U2 (14f)
U R' U2 R U F R' F' U R U R U' R' (14f)
U R U R2 F R F2 U' F U F' U F U' (14f)
R' U F2 U' F R F' U R' F2 U' F' R U2 F2 (15f)
U F U R U' R2 F' U2 R U R U' R2 U2 R U' (16f)
U2 F R U R2 F' R' U2 R F R' U2 R2 U' F' U2 (16f)
R' U' R' F' U' F U R2 U2 R' U' R U (13f)
U' F R2 U F' R' F U' R2 F' U' R U R' (14f)
U F' U2 R2 F R U' R U' R2 U' F' U R2 F U' (16f)
R U2 R' F2 U' R U' R' F R U R' U F' R' F' R (17f)
U' F U R' F' R F' U F2 U2 R' F' R F' U F (16f)
R U2 R' F R' F R' F R' U2 R F' R F2 R (15f)
U2 F' U2 F2 R' F' R' F' U2 F U2 F R2 F' (14f)
U' R U2 R2 F U' R2 U' R2 U F' U R U2 (14f)
U2 R' F R F U R' U F2 U F2 U' F' R U2 F2 U2 (17f)
R' F R F2 U2 F2 R' F' R F' U F U2 F' U F (16f)
U2 R U R' U2 F2 R U2 R' U2 R' F2 R2 U R' U2 (16f)
U2 R2 F U F2 U F2 U' R2 U' R' F' U' R (14f)
U2 R' U' F' U F2 U R U' R2 F' R2 U R' U' R U (17f)
U' R' F' U' F2 R U' R' U2 R U R' F' R U (15f)
R F' R' U2 F U2 R' U' R F' R' U R2 F R' (15f)
U2 F' U R U R2 F R F2 U F2 U' F' U2 F U' (16f)
U' F R U' R2 U2 R2 U R2 U F R F' U' F' U' (16f)
R' U R' U' F R F' R' U R F' U F U' R (15f)
F' U2 F2 R' F' R U2 F R' F2 U' F U R U' (15f)
U F U R' F R F2 R' F U' F' U R (13f)
R' U2 F' R F' U2 F2 U2 F R2 F2 R U2 F R (15f)
U F U' R2 U' F2 U' F2 U2 R2 U F2 U2 F (14f)
U R' U' F U R U' R' F2 U F U' F' U' F R U' (17f)
U2 R' U2 R U2 F' U' F2 R' F' U R2 U2 R' U2 (15f)
F' R' F R F R2 U R' F' R' U' F2 R U F R' (16f)
F2 U' F2 U F U R' U' F2 R F R' U R (14f)
U' R U R' U R F U' R' U R U F' U R' (15f)
U' R' F2 U2 F R' U' F2 U F R U' F U' F2 R (16f)
U R' U' F U' R2 F2 R2 U2 R2 F2 R' U' R' F' R (16f)
U' F' R U' R F' R' U R F' U' F' U F2 R2 F (16f)
U' F U2 F' R' U' F U F' R U2 F R' F R F2 (16f)
R' F R U R' F' R F U' R U R' U' F' U (15f)
U R F U' F' U' R' F R' F' R2 F U2 F' R' U2 (16f)
U2 F U R U' R U R' U R F R' F' U2 R2 F' U' (17f)
U' R U R' F2 U' F U' F' U2 F U F R U' R' (16f)
R' U' F' U2 F2 U2 F2 U' F R U F U2 F' (14f)
U2 R U R' U R' F2 U' R2 U' R2 U F2 U R (15f)
U' F' R' U F U2 F U F2 U' F2 R F U' (14f)
U F R' U' R2 U' R2 U2 F R F2 U F U2 F' (15f)
U R' F2 U F R' F R F' U' F2 R U2 F U2 F' (16f)
U R U' R2 F' U' F U R2 U' R' U R U R' U2 (16f)
U R U2 R' U2 R' F R F' U2 R' F' U' F U R U (17f)
U R' F' U' F2 R' F' R U' R U2 R' U F R' F' R2 (17f)
U R' F R U R' U' F' R' F R F2 U F R U' (16f)
R' U2 R U F2 R' F2 U2 F U' F' U' F2 R F2 (15f)
U R2 F2 R2 F' U2 F R2 F2 R F R F2 U2 F (15f)
R' F' R2 U R' F' U R' F R U F2 R U2 R' U2 (16f)
U2 F R U2 R' F' U2 F R' F R F2 U (13f)
R F' U2 F U' R U2 R F R2 F' U R U2 R2 U (16f)
U R U R' U2 F' U2 F R U2 R' U F' U2 F (15f)
R U2 R2 F' U2 R' U2 R U2 F R' U2 R2 U2 R (15f)
U R' F2 U' F2 U F' R F' U2 F' R' F2 R U2 F (16f)
F R' F' R U2 R U' R2 F R F2 U F (13f)
R' F2 U2 R' F2 R U2 F2 U F' U2 F U' R U' (15f)






Spoiler: A set of 33 algorithms that, when combined with their mirrors, are sufficient to do 2LLL




U F U R' U' R' U R2 U' R2 F' U2 R' U2 R (15f)
U' R U2 R U2 F R2 F' U2 R2 U' R U R' U (15f)
U R' F' U' F R U R U2 F R' F' U2 R2 U2 R (16f)
U' R' F R U2 F' U R U R' F' U2 F2 U' F' U2 (16f)
U2 F' U2 F R2 F' U F2 R' F2 U' F R' U' R' (15f)
U R' U' F U R U' R' F2 U F U' F' U' F R U' (17f)
U2 R U2 R2 F R F2 U' F U R U2 R' F' U' F (16f)
F' U2 F R' F R F' U2 F R' F' R U' (13f)
F R' F' U F R U2 F' U' R2 F R2 U' R' F' R (16f)
U2 F' U2 F2 U2 F' U' F' R' F2 R F2 U' F' U2 (15f)
F' U F R' F R F' R U' R' U2 F' U2 F U (15f)
U R F U' R' U' R U2 F' R2 F R U' R' F' R (16f)
R U R' U2 F' U F R U' R' U R U' R' U2 (15f)
R' U' R' U' F U R2 U2 R' U R U R' F2 U F R (17f)
R' U F2 R2 U' F2 U F2 R' F2 R2 U2 R' U2 R2 F2 (16f)
R' U' R2 F' U2 F U2 F R2 F' U' F' U' F R (15f)
U2 F U R' U' R F' R' U R U2 (11f)
U R' U R' F R' F2 R' U' R F2 R F' R U' R U' (17f)
R' F R U R' U' F' R U' R' U2 R U' (13f)
R' F R2 U R' F2 U F2 U2 F' U2 R U' R' (14f)
R U R' U R U' R2 F R2 U R' U' F2 U2 F U (16f)
U2 F R U2 R2 U' F' U F R U2 F' (12f)
F U F' R' F U' F' U F' U' F U R U (14f)
F U' F' U R' F U F U F' U' F2 R U2 (14f)
U F U2 F2 U' F2 U' F' R' F R F2 U2 F (14f)
F R F' U2 F R' F2 U2 F2 R F' R' U F' U F (16f)
R U2 R' U2 F' U' F R2 U F' U R' U' R F U' R2 (17f)
U' F U2 F2 U' F2 U' F' U2 R U2 R' U' F' U' F (16f)
U' F' R2 F2 R U' R' U' R U' R' U2 F2 R2 F (15f)
U F' U2 F U F R U R' F2 U F2 U2 F' U' (15f)
U2 F U R' F R F2 U F U' R U' R' F' U' (15f)
U F R' F R F2 U F R U R' F' U2 (13f)
R' F R F2 U2 F2 R F' U R' U R F' U2 F R' (16f)






Spoiler: A set of 33 algorithms that, when combined with their inverses, are sufficient to do 2LLL




F' U2 F' U R' F U' F2 R U' R' U2 F2 R F2 U2 (16f)
U R2 F R F' R U R' F' U' F U' R (13f)
U F2 R F' U F R2 F' R2 F' U2 F U F R' F2 (16f)
U F U R' F2 R F2 R F2 R2 F' R2 F' R' U' F' (16f)
U F U2 F' R' U' F U R U2 F' U' F' U F U (16f)
U' R U F R' U' R2 F R' F' R' U F' U2 R' U2 (16f)
U R' U2 R2 U2 F R F' U2 R' U' R' F' U F R (16f)
U2 R U R2 F R F R U2 R' U F U R' F2 R (16f)
F U' R' U' R U F' U2 R' U R U' R' U2 R (15f)
U2 R U' R2 F2 U' R2 U' R2 U F2 U R2 U R' U2 (16f)
U2 R' F R2 U R' F' R' F R2 U' R2 F' R U (15f)
U' R' F' U F2 R2 F' U2 F' U2 F R2 U R (14f)
U F U R U' R' F' R' U2 R U R' U R (14f)
U F2 R' U' F R F2 R' U F R F2 R' F R (15f)
R U2 R' F2 R U2 R' U F' U' R U2 R' F U F (16f)
U2 R2 F U F U F' U' F' R2 U' R' F' R U (15f)
U' R2 F2 R' U2 R' U' R U' R F2 R2 U (13f)
U F' U' F' U' R' F' R U2 F U R' F R (14f)
U2 F R U' R2 F R F' U2 F' U2 F R U2 R' U' F' (17f)
R' F' R U' F' U' F U' R U' R' U' F U (14f)
F U R U R' U' F' U' R' U' F U F' R U2 (15f)
U' F R U R2 F R F' R' F R F2 U F U2 F' (16f)
R' U2 R2 U R2 U R U' F' U F R U' R' U (15f)
U2 R U' R' U2 R F U R2 F R2 F' U' F' R' (15f)
U R U F2 R' F2 U2 F U' R F' U' F U F' U' R' (17f)
U2 F U R U' R' F' U R' U' F' U F R U (15f)
U2 F U F2 U2 F R U' R' U' F U2 R' F' R U2 (16f)
F' U R' F R U2 F' U2 F2 U F' U' F U2 F U F (17f)
U' R F' R' F2 U2 F2 R F R2 F2 U2 F U2 F2 R (16f)
U F' U2 F R' F R U F2 U F2 U2 F' U (14f)
R F U' R2 U2 R U2 R U2 R U' F' R' (13f)
U' F U2 F2 U2 F' U F' U' F2 U2 F U2 R' F' R (16f)
U2 R F R' F' R U R' F' U' F R2 U' R2 U' (15f)






Spoiler: A set of 19 algorithms that, when combined with their mirrors, inverses, and mirror-inverses, are sufficient to do 2LLL




R U2 R2 F R2 U R' U' F2 U2 F R' F R F' (15f)
U' F R F' U2 F U R U' R' F' U F R' F' (15f)
U' F2 R' F' U R2 U R U' R U R U2 F R F2 (16f)
U2 R U2 R' F R' F' R2 U' R2 F R F2 U F U' (16f)
U' F U R U' R' U2 F' R F R' U2 R F' R' (15f)
R2 U' R' F R F2 U F2 R F' R U' (12f)
U' F U R' U F2 U' F2 U' R F2 U2 F U' (14f)
U2 R' U2 R U F R' F' U F R F' U2 (13f)
R U' R' F R' F R2 F U F' R2 F2 R U (14f)
U' R' F2 R2 U' F U' F' U2 R2 F2 R U (13f)
U2 F' U2 R' U F' R' F' R F2 U' F' R U F (15f)
R' U2 R2 U R F' U' F U R' U2 R' U R' U2 R (16f)
U2 F' U2 F U' F U R' U' F' U R F' U2 F (15f)
U2 R2 U2 R' U R U' R U2 R F' U2 F R (14f)
U2 R' F2 U' R F R' U F2 R U F' U' F (14f)
R U2 R' U R U' R2 U2 R2 U F R2 F' R2 U R (16f)
F U R U F R' F' U F R F' U R' U F' (15f)
U' F' U' F R U2 F' U' F U F R' F2 U F (15f)
U' F2 R2 F2 U F U F R2 F2 U' F R2 F' U (15f)



For what it's worth, the program that generated these does not care about the average number of moves needed, only the number of algorithms, so these are probably not optimal for designing a practical solution system. Still, if anyone comes up with anything new using these, I'll be interested to see it.




MM99 said:


> 28 or less if you count inverses and stuff with ZZ!


I'm not sure as to your meaning when you say "inverses and stuff". By "stuff", do you mean mirrors? If you are saying that ZZ can get from any LL state to any other in 2 looks using a set of 28 algorithms and their mirrors/inverses/mirror-inverses, then I suppose my reply would be that 19 is smaller than 28. If you meant something else, please clarify.


----------



## Kirjava (Jun 10, 2014)

Tempus said:


> I do not personally know how one would apply them intelligently, but the computer would know just by being able to remember every possible scenario at once. Perhaps a set of guidelines or rules could be developed, but I have not tried to do so. I've just thrown processing power and tricky programming at it to try to answer the question of what's possible.



This is what I was attempting to do with my LL method. I have ideas and ways of getting it done but the sheer number of cases requires a huge time investment to finish it.

It's a shame that those 19 cases are not very nice to execute. Would you be able to generate other groups? Would you be able to force certain algs to appear in the lists?


----------



## Tempus (Jun 10, 2014)

Kirjava said:


> It's a shame that those 19 cases are not very nice to execute. Would you be able to generate other groups? Would you be able to force certain algs to appear in the lists?


With some small changes to the code, that should be possible, but the more you micromanage the computer's decisions, the more likely it becomes that it will require more than 19 algorithms. I'll try to work up a version that allows the input of a list of forced algorithms, such that the machine will only try to decide the remaining ones.

Meanwhile, you can figure out a list of algorithms you'd like to force, and which one of the various questions you want them forced into. In other words, are we talking about 2LLL or 3LLL? Are we counting all algorithms as distinct, or are we allowing mirrors, inverses, or both? Please be precise; I hate ambiguity.

EDIT: On second thought, the 19 you mentioned implies that you meant 2LLL and that you meant to include both mirrors and inverses.

EDIT #2: The new forced-algorithm version of the program is written, and I am currently testing it.

EDIT #3: As a test, I forced it to include these two algorithms:
F R U R' U' F' (6f)
R U R' U R U2 R' (7f)
It succeeded in finding an appropriate set of 19 that includes the two algorithms that I forced on it:
F R U R' U' F' (6f)
R U R' U R U2 R' (7f)
U F R' F' R F R' F' R F' U' F U F' U' F (16f)
U F R' F' U2 R' U2 F' U F R U F R2 F' U (16f)
F R U R' U2 R' U R U F' U R' U R U' (15f)
R' F R' U' R2 U' R2 U F2 U F2 R F' R U2 (15f)
U' R U R2 F' U F U2 F' U' F R2 U2 R' (14f)
R' U F' R U R2 F R2 U2 R' U' F2 U2 F2 R U2 (16f)
U F2 U R' F' R U' F' U' F U F U' F2 U' (15f)
F U R U' R' F' R' F' U' F U F' U' F U R (16f)
U2 R' U2 F2 U2 F2 U2 F U2 F' U2 F U2 F2 R (15f)
R2 U R' U2 F' U2 R' U R F2 R2 F' R' U2 R2 (15f)
U' R U' F U' R F R F' U' R2 U R F' U2 R' (16f)
U F' U F U R U2 R' U2 F' U F U' F' U2 F (16f)
U R' U F' R' U2 R U2 F U' R' U R2 U2 (14f)
U R U R2 F R F2 U' F U2 F R' F' R U2 (15f)
U F U R2 U2 R F' R' U2 R F R U' F' (14f)
U R' U F U' F R F2 U2 F' R' U F2 R F U2 (16f)
F U R U' R2 F R F2 U F R U R' F' U2 (15f)


----------



## Kirjava (Jun 10, 2014)

Tempus said:


> With some small changes to the code, that should be possible, but the more you micromanage the computer's decisions, the more likely it becomes that it will require more than 19 algorithms. I'll try to work up a version that allows the input of a list of forced algorithms, such that the machine will only try to decide the remaining ones.



More than 19 is not a problem at all. When I tried this I used about 30. We want to minimise the amount of 'extra' algs as much as possible (they don't look very good).



Tempus said:


> EDIT: On second thought, the 19 you mentioned implies that you meant 2LLL and that you meant to include both mirrors and inverses.



Yep.

How about; 



Spoiler



RUR'URU2R'
RUR'URU'R'URU2R' 
rUR'URU2r'
RUR2FRF2UF
R'U2R2UR2URU'RU'R'
FR'F'RU2RU2R'
FRUR'U'F'
FRUR'U'RUR'U'F'
RU'L'UR'U'L
RB'RF2R'BRF2R2
M2U'MU2M'U'M2
M2UM2U2M2UM2
RUR'U'R'FRF'
rUR'U'r'FRF'
R'FRUR'U'F'UR
R'U'R'FRF'UR
F'LFL'U'L'UL
RUR'U'M'URU'r'
M'UMU2M'UM
RU2R2'U'R2U'R2'U2'R


----------



## AvGalen (Jun 10, 2014)

I like this idea a lot.
* having a small list of, let's say, 15 of your favorite algs
* to which you only have to add, let's say, 5-10 other algs
* to always have a 2 look last layer.
I think it would be doable to learn all combinations of those algs actually. The goal of course is
* to have the 15 algs be perfect, even when mirrored/inversed
* to have the 5-10 extra algs be really good as well (and closer to 5 than 10). Not one stinker

The problem would be of course the cases where you already know a 1 look alg that is bad (Let's say N-Perm for example). With this new system it might become a 2 look that is still slower than the 1 look bad alg that you already know


----------



## Carrot (Jun 10, 2014)

AvGalen said:


> The problem would be of course the cases where you already know a 1 look alg that is bad (Let's say N-Perm for example). With this new system it might become a 2 look that is still slower than the 1 look bad alg that you already know



why would you unlearn 1-look cases you already know? that sounds like a stupid approach to me.


----------



## elrog (Jun 10, 2014)

I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time. It is also easy to make sure you always have at least 1 corner oriented. Phasing is also a really easy option to reduce cases. I know you can't do all of these at once, but any 1 of them isn't hard at all.

I would also like to see this done with ZBLL. Of course you could always orient 1 corner or do phasing here as well.


----------



## Tempus (Jun 13, 2014)

Kirjava said:


> More than 19 is not a problem at all. When I tried this I used about 30. We want to minimise the amount of 'extra' algs as much as possible (they don't look very good).


The problem I see is that by choosing algorithms that are especially easy/quick to perform, you are probably choosing algorithms that are similar to each other in some way that is hard to define.

By way of analogy, if one considered all the paths one might walk from one's house, and ranked them all on how easy they are to walk, and decided to use only the paths deemed especially easy, the result would likely be that you always proceed downhill, and have no way to reach the destinations that are uphill from your house. Within this analogy, my program would suggest uphill paths to you, which you would then find distasteful because they are harder to walk than the ones to which you have become accustomed.



Kirjava said:


> How about;
> 
> 
> 
> ...


Okay, over the last couple days I have been running scenarios using this list. I had to translate the notation because my program assumes a stationary core, and your wide turns and slice moves therefore make no sense to it, so when you see that some of the first 20 algorithms look different from yours, that is why. They're not actually different, just expressed in a more rigid language.

When your list of 20 algorithms is forced on the program, it is clearly unable to do 2LLL using 26 or fewer. It gets tantalizingly close to success with 27, but thus far it has not quite managed it. 28 appears to be easy for it. Here are three sets of 28, each of which begins with your list of 20 and is sufficient for 2LLL:


Spoiler: Set #1




R U R' U R U2 R' (7f)
R U R' U R U' R' U R U2 R' (11f)
L F R' F R F2 L' (7f)
R U R2 F R F2 U F (8f)
R' U2 R2 U R2 U R U' R U' R' (11f)
F R' F' R U2 R U2 R' (8f)
F R U R' U' F' (6f)
F R U R' U' R U R' U' F' (10f)
R U' L' U R' U' L (7f)
R B' R F2 R' B R F2 R2 (9f)
R2 L2 D' R L' F2 R' L D' R2 L2 (11f)
R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)
R U R' U' R' F R F' (8f)
L F R' F' L' F R F' (8f)
R' F R U R' U' F' U R (9f)
R' U' R' F R F' U R (8f)
F' L F L' U' L' U L (8f)
R U R' U' R' L F R F' L' (10f)
R' L F R L' U2 R' L F R L' (11f)
R U2 R2 U' R2 U' R2 U2 R (9f)
U' R U' R F2 U' F R F' R' U2 R U' F2 R2 (15f)
U2 R' U R U' R2 F R2 U R' U' F' R (13f)
U F U2 F' U' R' F R U' R' F' R U (13f)
U2 F' U2 F U2 F2 U F' R' F U' F U F R U2 (16f)
U R2 F2 R2 F U F' R' U' F' U R' F2 R2 U (15f)
F2 U' F2 R U' R2 F' R2 U' R' F R U2 R' U' (15f)
U2 F R' F' U' F R' U' R' U R U R2 U R' U' F' (17f)
R2 U' F' R' U F' U' F U' R U2 F2 R F' U2 R (16f)






Spoiler: Set #2




R U R' U R U2 R' (7f)
R U R' U R U' R' U R U2 R' (11f)
L F R' F R F2 L' (7f)
R U R2 F R F2 U F (8f)
R' U2 R2 U R2 U R U' R U' R' (11f)
F R' F' R U2 R U2 R' (8f)
F R U R' U' F' (6f)
F R U R' U' R U R' U' F' (10f)
R U' L' U R' U' L (7f)
R B' R F2 R' B R F2 R2 (9f)
R2 L2 D' R L' F2 R' L D' R2 L2 (11f)
R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)
R U R' U' R' F R F' (8f)
L F R' F' L' F R F' (8f)
R' F R U R' U' F' U R (9f)
R' U' R' F R F' U R (8f)
F' L F L' U' L' U L (8f)
R U R' U' R' L F R F' L' (10f)
R' L F R L' U2 R' L F R L' (11f)
R U2 R2 U' R2 U' R2 U2 R (9f)
R' F2 R2 U2 R' U F2 U' F2 U' R U' R2 F2 R (15f)
F' U R' F2 U2 F' U2 F' U2 F2 U2 F R U' F (15f)
U2 R2 F2 U R' U2 R F R' F' U F2 R' U R' U2 (16f)
R U R' U F R2 F2 R' U2 R F2 R F' R U (15f)
U2 F' U F U R' U' F' U F2 R F2 U2 F (14f)
F R' F' R U' R' U' F U R U' F' R U2 R' (15f)
F' U' R U2 F2 R2 F' R2 U2 R U R' F' R' F (15f)
F U' F' U2 F U F' R U2 F R' F R F2 U2 R' (16f)






Spoiler: Set #3




R U R' U R U2 R' (7f)
R U R' U R U' R' U R U2 R' (11f)
L F R' F R F2 L' (7f)
R U R2 F R F2 U F (8f)
R' U2 R2 U R2 U R U' R U' R' (11f)
F R' F' R U2 R U2 R' (8f)
F R U R' U' F' (6f)
F R U R' U' R U R' U' F' (10f)
R U' L' U R' U' L (7f)
R B' R F2 R' B R F2 R2 (9f)
R2 L2 D' R L' F2 R' L D' R2 L2 (11f)
R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)
R U R' U' R' F R F' (8f)
L F R' F' L' F R F' (8f)
R' F R U R' U' F' U R (9f)
R' U' R' F R F' U R (8f)
F' L F L' U' L' U L (8f)
R U R' U' R' L F R F' L' (10f)
R' L F R L' U2 R' L F R L' (11f)
R U2 R2 U' R2 U' R2 U2 R (9f)
U' F R' F' R U2 R' U' R2 U' R2 U2 R U' (14f)
U2 F' U R' F R U' F' U' F' U F' R' F' R U (16f)
R' F R2 F' U2 F' U2 F R' U2 R' F' U' F U R (16f)
U R2 F' R U R U2 R' F2 R' F2 R U R' F R2 (16f)
U2 F U F R' F' U' F U F' R F U' F2 (14f)
U' R' F R F2 U R' U' R F2 R' F' U R U' (15f)
F2 R F' U' R' F' U F2 R' F2 R' U2 R F R F' (16f)
U' R' U R U2 R2 U' F' U F R U R U' (14f)



In each case, the first twenty are the ones you provided, in some cases re-translated into a form the program will understand. The fact that it cannot improve on 28 highlights just how thematically similar your algorithms are to each other.




elrog said:


> I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time.


I think the reason is that this is the Puzzle Theory subforum, not the Puzzle Practice subforum, and theorists tend to be more interested in cases that seem more conceptually pure and complete. For this reason, an analysis of the last layer is less interesting than an analysis of the cube as a whole, and an analysis of an arbitrary subset of last layer cases is even less interesting than that. I will take the idea under consideration, however.


----------



## Kirjava (Jun 13, 2014)

First of all, thanks for looking into this!  Much appreciated!



Tempus said:


> The problem I see is that by choosing algorithms that are especially easy/quick to perform, you are probably choosing algorithms that are similar to each other in some way that is hard to define.



I hadn't even considered this - I think when I made my initial list I tried to include algs that were 'versatile' but hadn't considered distinctiveness. Thanks for the insight.



Tempus said:


> Okay, over the last couple days I have been running scenarios using this list. I had to translate the notation because my program assumes a stationary core, and your wide turns and slice moves therefore make no sense to it, so when you see that some of the first 20 algorithms look different from yours, that is why. They're not actually different, just expressed in a more rigid language.



It's cool bro I understand what fixed centre notation is. I should've considered this before giving you a list.



Tempus said:


> When your list of 20 algorithms is forced on the program, it is clearly unable to do 2LLL using 26 or fewer. It gets tantalizingly close to success with 27, but thus far it has not quite managed it. 28 appears to be easy for it. Here are three sets of 28, each of which begins with your list of 20 and is sufficient for 2LLL:



This is amazing stuff. Tempted to recreate this with some of these subsets. 

I was actually working on the final draft where the cases are organised in a learnable format. Might start again now.



Tempus said:


> The fact that it cannot improve on 28 highlights just how thematically similar your algorithms are to each other.



I could try and find a more varied set of algs? Would you distribute your source?



Tempus said:


> I will take the idea under consideration, however.



I'm happy to turn this into an actual method. I've done it before. (it's a lot of work)


----------



## Tempus (Jun 14, 2014)

Kirjava said:


> First of all, thanks for looking into this!  Much appreciated!


No problemo. :tu As I said near the beginning of this thread, I think this is a fascinating question.



Kirjava said:


> I hadn't even considered this - I think when I made my initial list I tried to include algs that were 'versatile' but hadn't considered distinctivity. Thanks for the insight.


You're welcome. For what it's worth, I have identified a significant flaw in your list. Algorithms #13 and #17 are mirror-inverses of each other, so you can remove #17 and have no effect on the coverage. This reduces the number of algorithms needed from 28 to 27.

Today I've been working on an experimental scoring system for redundancy in a forced algorithm list. It scores one point for each way that it can find to express an algorithm from the list in terms of two algorithms from the same list. I tried running it on your list (with #17 removed) and it got a score of 844. For comparison, when I feed it my program's best list of 19 algorithms, the score is just 116.
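The scoring idea can be expressed abstractly. The sketch below is my own reading of the description, using a toy group (addition mod 12) in place of real cube-state composition:

```python
from itertools import product

def redundancy_score(algs, compose, same_effect):
    """One point for each way an algorithm in the list can be expressed as a
    composition of two algorithms from the same list (Tempus's scoring idea)."""
    return sum(
        1
        for a, b in product(algs, repeat=2)
        for c in algs
        if same_effect(compose(a, b), c)
    )

# Toy example: "algorithms" are residues mod 12, composition is addition.
# 1+1=2, 1+2=3 and 2+1=3 are all expressible within the list, so score is 3.
score = redundancy_score(
    [1, 2, 3],
    compose=lambda a, b: (a + b) % 12,
    same_effect=lambda x, y: x == y,
)
print(score)  # -> 3
```

A higher score means the list is more internally redundant, matching the 844-vs-116 comparison above in spirit.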

But wait, there's more. After days of processing, and a bit of luck, my program has finally managed to find an n=27 solution for your original list of forced algorithms. Removing the offending #17, this means we now have an n=26 solution which includes your favorite algorithms...


Spoiler: ...and here it is:




R U R' U R U2 R' (7f)
R U R' U R U' R' U R U2 R' (11f)
L F R' F R F2 L' (7f)
R U R2 F R F2 U F (8f)
R' U2 R2 U R2 U R U' R U' R' (11f)
F R' F' R U2 R U2 R' (8f)
F R U R' U' F' (6f)
F R U R' U' R U R' U' F' (10f)
R U' L' U R' U' L (7f)
R B' R F2 R' B R F2 R2 (9f)
R2 L2 D' R L' F2 R' L D' R2 L2 (11f)
R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)
R U R' U' R' F R F' (8f)
L F R' F' L' F R F' (8f)
R' F R U R' U' F' U R (9f)
R' U' R' F R F' U R (8f)
R U R' U' R' L F R F' L' (10f)
R' L F R L' U2 R' L F R L' (11f)
R U2 R2 U' R2 U' R2 U2 R (9f)
U F' U' F2 R' F' R2 U' R' U' R' U' F' U F R (16f)
U F U2 F2 U' R' F2 R F R U' R' U2 F U2 F' (16f)
F U2 R' U' R F' R' U2 F U F' U' R U2 (14f)
U R2 F' R U R U2 R' F2 R' F2 R U R' F R2 U' (17f)
U R2 U' R F2 R2 F' R U2 F U F' U' R F2 R (16f)
F R2 U2 F R F' R2 U' F' U2 F R2 U' R2 U F' (16f)
U' F2 R2 F' U R' U' F' R' U R F2 R2 F2 (14f)



The computer-generated algorithms (originally shown in red) are the last seven in the list.



Kirjava said:


> This is amazing stuff. Tempted to recreate this with some of these subsets.
> 
> I was actually working on the final draft where the cases are organised in a learnable format. Might start again now.


If you do, you'll probably want to use the n=26 list given above. I'm certain the total algorithm count cannot be lowered beyond this point while still including the 19 _distinct_ algorithms you favor.



Kirjava said:


> I could try and find a more varied set of algs? Would you distribute your source?


*Sigh*...You are not the first person to ask for my code since I began participating in this thread, and you may not be the last, so I will publicly voice my opinion about this issue.

There is a litany of reasons that I do not at this time intend to distribute the program to others, including but not limited to these:

* It is built using four different reusable code modules that I also wrote for my own use, and it would be impossible to distribute the program without also distributing those.
* I am secretive by nature.
* I don't like to show my code to others because it is ugly.
* On the rare occasion that I _do_ distribute my code, it is code that was written from the ground up with distribution in mind. This is not that code.
* The user interface barely even exists. (When I want to change a setting, I have to recompile it. Yes, it's just that bad.)
* I don't need the extra liability.
* In the wrong hands, it could be used for evil. (Evil cubing? Well, maybe not...)
* It just makes me uncomfortable.
I figure I can work with you (and perhaps others) on this forum, whenever I have the time and inclination. I would prefer to keep said cooperation in public threads, however. I just finished installing a new air conditioner, so I should hopefully be able to carry on processing until the weather gets truly vicious.



Kirjava said:


> I'm happy to turn this into an actual method. I've done it before. (it's a lot of work)


Personally, I find it hard to believe that it could be fast in practice. I say this because it seems to me that it would be easier to memorize more algorithms (i.e. full OLL and full PLL) than it would be to deal with the issue of how to learn to rapidly recognize which one of a smaller set of algorithms to apply in a system lacking the orderly distinction between orientation and permutation. But if you want to give it a try, by all means, have at it. If it catches on, you can credit me as your assistant.


----------



## Kirjava (Jun 16, 2014)

Tempus said:


> If you do, you'll probably want to use the n=26 list given above. I'm certain the total algorithm count cannot be lowered beyond this point while still including the 19 _distinct_ algorithms you favor.



Thanks for the updated list. I need to sink some time into looking deeper into the results.

Joey brought up something interesting at the competition this weekend. Is 19 the minimum number of algorithms required to solve a two look last layer, or is it the minimum number of last layer algorithms to solve a two look last layer?

That is, could algorithms that influence F2L produce a shorter list?



Tempus said:


> I do not at this time intend to distribute the program to others



It's cool bro.



Tempus said:


> Personally, I find it hard to believe that it could be fast in practice. I say this because it seems to me that it would be easier to memorize more algorithms (i.e. full OLL and full PLL) than it would be to deal with the issue of how to learn to rapidly recognize which one of a smaller set of algorithms to apply in a system lacking the orderly distinction between orientation and permutation. But if you want to give it a try, by all means, have at it. If it catches on, you can credit me as your assistant.



Easier, certainly - but potentially not faster. The only difficulty with a system like this lies in the learning of it. I have ideas and tricks that make it easier, but have never managed to document it to the extent that it is easily learnable. I think it is possible, but again, time will need to be sunk in to actually test that idea.

I think the first thing I need to do is recode my 2LL solver. Added to the todo list :U


----------



## Tempus (Jun 18, 2014)

elrog said:


> I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time.


Okay, I've been working on this problem. I wrote in an option for edge control, and I have managed, after days of processing, to come up with two sets of 17 algorithms, each of which, together with their mirrors, inverses, and mirror-inverses, is sufficient to solve the last layer *if and only if partial edge control has been used*, such that *at least two edges are correctly oriented*. Here they are.


Spoiler: Set #1




R' U' F U F' R F U' R U R' U' F' U (14f)
U2 R2 U F U' F' U' R F' U' F2 U' F2 U2 F R (16f)
F U2 R2 U R2 U' R2 U R2 U2 R2 U2 R2 F' (14f)
F R F' R U2 R' U' F' R' U R U2 F U' R' (15f)
U F2 U F2 U F2 U2 F2 U' F U' R U2 R' U F' (16f)
U F2 R2 F2 R' U2 R U2 F R F' U2 F2 R2 F2 U' (16f)
U R2 F2 U R2 F U F' R2 F U' F' U' F2 R2 (15f)
R' F2 U F' U F R' F' R U' F U' F2 R (14f)
U F2 U F U2 F U2 F' U F' U2 F2 U' F' U F (16f)
F R' F' U2 F R F' R' U R U2 R' U' R U (15f)
R' U' R' F' U' F U R2 U2 R' U' R U' (13f)
U2 F U2 F' U' R' F U' F' R U' R' U2 R U2 (15f)
R U R' U F' R' F' U' F2 U F' R U F (14f)
U' R' F2 R2 F U' F' R2 F' R F' R U R' U2 (15f)
U' F2 R' F' R F' U2 F U' F' U2 F U F' U (15f)
U2 F2 U' R F U2 F' U2 R' U F U' F U2 (14f)
U' F' U' F2 R U R' F2 U F2 U2 F' U2 (13f)






Spoiler: Set #2




U F2 U R U' R' F' U' F' U2 F2 U R U' R' F2 (16f)
U R' F' U' F U R U R U R2 F R F2 U F U (17f)
U2 R U2 R2 U2 R2 U R F2 R F2 U F R U F' U (17f)
U R' U R2 U2 F R2 F' U2 R2 U R U' R' U' R (16f)
F R' F' R2 U2 R2 F R F2 U' F U2 F' U' F U' (16f)
U2 R' U' R U F R2 U R2 U' R2 F' U' R' U2 R (16f)
R' U2 F R U R' U' F2 U F U R U' (13f)
U2 F U2 R' U R' U R' U' R2 U R' U2 R U2 F' (16f)
U F R' F' U2 R U R' U R2 U2 R' U (13f)
U2 R' F' U' F U R F' U' F R U2 R' F' U F U (17f)
U' R' F2 R U' F' R U2 R' F U2 F' R U' R' F2 (16f)
U R U R2 U R U R F R' F' U R2 U R U2 (16f)
U2 R' F R' F2 U F2 R2 F2 U2 F2 R F' R U R2 (16f)
U' R2 U2 R' U R U' R U2 R F' U2 F R (14f)
R' U' R U' F R' U' R' U' R' U R2 U F' (14f)
U F R' F' U F R' F' R U' F' U' F R2 U2 R' (16f)
U2 R' U' R F R' U' R2 U R2 U R F' U2 (14f)



You're welcome. 



elrog said:


> It is also easy to make sure you always have at least 1 corner oriented. Phasing is also a really easy option to reduce cases. I know you can't do all of these at once, but any 1 of them isn't hard at all.
> 
> I would also like to see this done with ZBLL. Of course you could always orient 1 corner or do phasing here as well.


Sorry, but I'm actually relatively new to the speedcubing community, so I'm not well-versed in all the various solution systems and acronyms of which you speak.




Kirjava said:


> Thanks for the updated list. I need to sink some time into looking deeper into the results.


You're welcome, and good luck.



Kirjava said:


> Joey brought up something interesting at the competition this weekend. Is 19 the minimum number of algorithms required to solve a two look last layer, or is it the minimum number of last layer algorithms to solve a two look last layer?
> 
> That is, could algorithms that influence F2L produce a shorter list?


No offence to your friend intended, but I think that this is impossible. It's a shame that I can't think of any way to conclusively prove it, but I can certainly think of ways to statistically suggest it.

Let's say you're doing 2LLL using a set of n algorithms and their mirrors/inverses/mirror-inverses, all of which leave the F2L as they found it. In this case, you can apply one of 4 AUFs, followed by one algorithm in any one of its 4 forms (itself, its mirror, its inverse, or its mirror-inverse), followed by one of 4 AUFs, at which point the cube may already be solved. If it isn't, you do one more algorithm in any one of its 4 forms and one more AUF. Taking into account the possibility of a complete LL skip, the theoretical limit to coverage for that algorithm set is 1024*n^2+64*n+4. (In practice it won't work out that way, because some combinations of algorithms will inevitably have the same effect as other combinations.)
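(The polynomial above can be sanity-checked directly; a small sketch, using the 62208 total-LL-state figure that appears later in the thread:)

```python
def max_coverage(n: int) -> int:
    """Theoretical two-look coverage limit for n algorithms plus their
    mirrors, inverses, and mirror-inverses:
      4 AUFs * 4n first algs * 4 AUFs * 4n second algs * 4 AUFs  (two-alg paths)
    + 4 AUFs * 4n algs * 4 AUFs                                  (one-alg paths)
    + 4 AUFs                                                     (LL skip)
    """
    return 1024 * n**2 + 64 * n + 4

print(max_coverage(18))  # 332932, comfortably above the 62208 LL states
```

The slack between 332932 and 62208 is exactly why overlap between combinations, not raw path count, is the binding constraint.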

Now let's imagine a set of algorithms, half of which disturb the F2L and half of which do not. In this case, you can never apply one algorithm from each half because the result will always be a messed up F2L. This fact cuts your options in half right away, and there are other limiting factors which should become clear in a moment.

Now let's imagine a set of algorithms that all disturb the F2L by swapping some pieces in the F2L with some pieces in the LL. For maximum versatility and mobility, and to avoid the above pitfall, they would all have to swap the exact same F2L pieces to the exact same positions in the LL. The result of this, however, is that you lose the opportunity to do AUF between the two algorithms, resulting in a 75% cut to versatility and mobility. Furthermore, it would be a complete waste because one can just mentally swap the F2L positions and the LL positions in question, and thereby see that there must exist a set of n LL-only algorithms which have the same effect, but which would allow AUF in between.

Now let's imagine a set of algorithms that all disturb the F2L only by rearranging the F2L within itself, leaving all LL pieces on the LL and all F2L pieces in the F2L. In the simplest case, imagine that all the algorithms just flip the FR edge and do nothing else to the F2L. This is better than the above scenarios, because you can use any two algorithms and you can still do AUF between them, but the coverage should still be smaller because it is now impossible to ever solve the LL using only one algorithm. This removes the middle 64*n term of the original polynomial, leaving a result of 1024*n^2+4 instead of 1024*n^2+64*n+4.

In every case there is a reduction in the theoretical mobility and versatility of the set by having it disturb the F2L. The only way I can possibly see it improving *ANYTHING* is if there were some unseen advantage to breaking LL parity between algorithms, but I can't imagine any reason why this would be so.

Can you? Can he?



Kirjava said:


> Easier, certainly - but potentially not faster. The only difficulty with a system like this lies in the learning of it. I have ideas and tricks that make it easier, but have never managed to document it to the extent that it is easily learnable. I think it is possible, but again, time will need to be sunk in to actually test that idea.


I'm not talking about the difficulty of learning it, but of speedy recognition. I figure that the speed of a solve relies on three things: speed of turning, speed of recognition, and efficiency of the solution system in reducing the number of turns. Some systems are very efficient in terms of turn count, but very slow in terms of recognition. Some are super-fast in terms of recognition, but require significantly more turns. The goal is a system that both reduces turn count _AND_ aids rapid recognition, and I suspect that these efforts to reduce the number of algorithms actually work against that goal, slowing recognition and increasing turn count.

Don't get me wrong. I'm still interested from a theory standpoint, but I don't believe the result could ever be practical for me. I have trouble just finding the next F2L pair. 



Kirjava said:


> I think the first thing I need to do is recode my 2LL solver. Added to the todo list :U


Perhaps this is a dumb question, but what does it do? "LL Solver" implies that it outputs a solution for a given LL position, but the 2 suggests that it doesn't. Does it perhaps try to solve a LL position using just two algorithms chosen from a provided list?


----------



## Kirjava (Jun 18, 2014)

Tempus said:


> You're welcome, and good luck.



Thanks. I started working on this again last night, I'll probably bump this thread/my thread when I reach a development worth posting.



Tempus said:


> In every case there is a reduction in the theoretical mobility and versatility of the set by having it disturb the F2L. The only way I can possibly see it improving *ANYTHING* is if there were some unseen advantage to breaking LL parity between algorithms, but I can't imagine any reason why this would be so.



Thanks for your analysis. After reading I'm inclined to think that this will not reduce alg count whatsoever. I don't have anything else to add.



Tempus said:


> I'm not talking about the difficulty of learning it, but of speedy recognition. I figure that the speed of a solve relies on three things: speed of turning, speed of recognition, and efficiency of the solution system in reducing the number of turns. Some systems are very efficient in terms of turn count, but very slow in terms of recognition. Some are super-fast in terms of recognition, but require significantly more turns. The goal is a system that both reduces turn count _AND_ aids rapid recognition, and I suspect that these efforts to reduce the number of algorithms actually work against that goal, slowing recognition and increasing turn count.



This system would be different from LL systems with large algorithm subsets. I'm inclined to think that the problem they have is algorithm recall, not recognition. The difficulty lies in remembering an alg from a pool of 400. There are more than just those two variables at play, and I think recognition isn't the problem you originally thought. There are also tricks with systems like this that can help aid recognition.

You can see more reasoning in my thread.

I think the biggest problem at the moment (aside from having to reorganise everything) is the 'bad' algs. I've considered generating more speed-optimal solutions, but haven't tried it yet.



Tempus said:


> Don't get me wrong. I'm still interested from a theory standpoint, but I don't believe the result could ever be practical for me. I have trouble just finding the next F2L pair.



Method development is reaching a point where you need to push into using more abstract concepts and trying ambitious weird things. Step concatenation and other standard structures have maxed out their usefulness and we need to do something new and different to improve on what we already have. I believe if I can circumvent problems with techniques like this (in this case with clever case sorting) they can prove to be a viable alternative.



Tempus said:


> Perhaps this is a dumb question, but what does it do? "LL Solver" implies that it outputs a solution for a given LL position, but the 2 suggests that it doesn't. Does it perhaps try to solve a LL position using just two algorithms chosen from a provided list?



Pretty much, I used an old version to automate the creation of this, but it isn't very useful at this point. I think a rewrite would be beneficial and there's just extra stuff I wanna add to it.


----------



## Tempus (Jun 18, 2014)

*UPDATE:* After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.

*Ladies and gentlemen, I give you the smallest complete 2LLL algorithm set in history!*
U2 R2 U' R' F U R U' R2 F' R2 F R F' R2 U' (16f)
R U2 R2 U2 R2 U R' F R' F' U R U2 R U R' (16f)
F U R' U' F' U' F U R U' F' R' U2 R (14f)
U2 F U2 F' R F R' U2 R F' R' U2 (12f)
U2 R' U' F U R U' R' F' R U (11f)
U' R' F R U2 F' U' F U' F2 U2 F (12f)
R' U' R F U R2 U' F' R U R' U' R' U (14f)
F U R2 U' F' U R' U' F R' F' R' U R U' R' (16f)
F R2 U R' U' R' U2 F R F' R' U' R U' R' F' (16f)
U2 R U2 R' F R' F' R U' F' U' F U R U2 R' (16f)
U R' U2 R U R' F U R U' R' F' R (13f)
U2 F' U R' U F2 U F2 U' F2 R U2 F (13f)
U' R' U' F U R F U F2 U' F' U' F2 U2 F (15f)
U R' U' R F R' U R U' R' F' R U2 R U2 R' (16f)
R U2 R2 F' U' R' U R U' R' U' R F R U R (16f)
U R2 U' F U R2 U' R' F' R2 U2 F R F' U2 R' U (17f)
U2 R U2 R' U' R U' R' F R U R' U' F' (14f)
U R' U' F' R' F U' F2 U F R U' F U2 R (15f)


----------



## Renslay (Jun 18, 2014)

Tempus said:


> *UPDATE:* After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.
> 
> *Ladies and gentlemen, I give you the smallest complete 2LLL algorithm set in history!*



How to use it?


----------



## Tempus (Jun 18, 2014)

Renslay said:


> How to use it?


Unfortunately, instructions for its use would basically be a list of 15551 distinct unsolved upper layer states, each followed by a number from 1 to 18 and an optional M (for mirror) and/or I (for Inverse), indicating which algorithm to apply in that situation. One would AUF until they could find their LL state in the list, and then apply the algorithm indicated. If it was not already solved, they would repeat the process once, and the LL would be solved once they did a final AUF.

This would be a long document, and too much, I think, for this forum. Especially when I don't know how to concisely describe a last layer state.
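(For what it's worth, the 15551 figure is arithmetically consistent with the 62208 total LL states cited later in the thread, if the list holds one representative per AUF class and omits the solved state. A sketch of that reading:)

```python
total_ll_states = 62208              # all last-layer states, including AUF
auf_classes = total_ll_states // 4   # assuming each class has exactly 4 AUF variants
unsolved = auf_classes - 1           # drop the solved class
print(unsolved)  # 15551
```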


----------



## Renslay (Jun 18, 2014)

Tempus said:


> Unfortunately, instructions for its use would basically be a list of 15551 distinct unsolved upper layer states, each followed by a number from 1 to 18 and an optional M (for mirror) and/or I (for Inverse), indicating which algorithm to apply in that situation. One would AUF until they could find their LL state in the list, and then apply the algorithm indicated. If it was not already solved, they would repeat the process once, and the LL would be solved once they did a final AUF.
> 
> This would be a long document, and too much, I think, for this forum. Especially when I don't know how to concisely describe a last layer state.



Ah, I see, so it's a theoretical / computational set. Still, it is good to know what the lower bound for a human solution might be; it could even be part of an LBL computer solving method.


----------



## Christopher Mowla (Jun 18, 2014)

Tempus said:


> This would be a long document, and too much, I think, for this forum. Especially when I don't know how to concisely describe a last layer state.


If you have other reasons as to why you do not want to post these instructions with your alg set (perhaps you want to publish this result in a journal?) then I understand completely. However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you _prove_ it by giving us instructions on how to use your set of 18 algs to solve every last layer case.

Maybe others on this forum have other ideas about how to abstractly represent a 3x3x3 last layer case, but this is mine.


Spoiler



You can post all last layer cases in a form like the following:

C{1,2,3,4}|E{1,2,3,4}

, where C stands for "corners" and E stands for "edges". Pick a convention in which to number a cube (here's my 3x3x3 numbered cube, for example; note that I gave credit to the creators of CubeTwister for this image, even though I made it, just to let you know that I'm not them). I assume you already treated the cube as a numbered cube anyway in order to program this software.

In addition, for corner scrambles, let 1+ represent, for example, corner 1 twisted 120 degrees clockwise from oriented, and let 1- represent corner 1 twisted 120 degrees counterclockwise. Similarly, let 1+ in the edge list represent edge 1 flipped. (If no + or - appears to the right of a number in a list, we assume the piece is correctly oriented.)

For example, the last layer case generated by the first algorithm in your list of 18 generating algorithms can be represented as:
C{1+,2,4,3-}|E{1,4+,3,2+} on my 3x3x3 numbered cube.


You do not have to post these in a post directly. Put them all in a txt file and attach it to a post, or if it is too large to be put on your share of the forums attachment storage space, then upload it to an external file hosting site you trust (or your own website) and then provide us with a link.
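(A minimal sketch of this kind of encoding in Python, with hypothetical field names; nothing here is from cmowla's or Tempus's actual software:)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LLCase:
    """Hypothetical encoding of a case like C{1+,2,4,3-}|E{1,4+,3,2+}."""
    corner_perm: tuple  # which numbered corner sits in each slot
    corner_ori: tuple   # twist per corner: 0, +1, or -1
    edge_perm: tuple    # which numbered edge sits in each slot
    edge_ori: tuple     # flip per edge: 0 or 1

case = LLCase((1, 2, 4, 3), (1, 0, 0, -1), (1, 4, 3, 2), (0, 1, 0, 1))
print(case.corner_ori)  # (1, 0, 0, -1)
```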


----------



## Kirjava (Jun 18, 2014)

Tempus said:


> *UPDATE:* After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.



Awesome! 

I have another request; would it be much bother for me to suggest alternative lists to see if I can find a better subset that includes a group of forced algs? My intention is to minimise arbitrary algs. I don't know how much effort/processing time this requires on your part.



cmowla said:


> However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you _prove_ it by giving us instructions on how to use your set of 18 algs to solve every last layer case.



I should be able to prove/disprove it very soon.


----------



## qqwref (Jun 19, 2014)

Tempus said:


> UPDATE: After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.


Very cool! Here are some optimal solutions (not counting AUF). Any of these can be rotated by any y rotation to make them nicer. No alg requires more than 14 moves.



Spoiler




F' U' F R2 B' R' U B U R' B' U2 B / F U' F' L' U R' F2 L F L' F L R / F U' F' L' B' R2 F R F' R B U L (13f*)
F R U' R' F D B' R B D' F2 (11f*)
B L' F' L2 B F' R' D2 R B2 F2 (11f*)
F2 L2 F' R2 F L2 F' R2 F' (9f*)
L' R U B U' B' R' U' L / L' U' B U L U' L' B' L (9f*)
F' L F U2 L' U' L U' L2 U2 L (11f*)
F R U R' F2 L' U' L F2 U' F2 U2 F / R' D' F L' U2 L2 F L2 U2 L F' D R / R' U' R F U R2 U' F' R U R' U' R' (13f*)
B' U2 B R B2 R' U2 F R B2 R B' R2 F' / L U B L' B' F' L' F D' U' L U2 L' D / D L U B L' B' F' L' F D' U' L U2 L' / L U B L' F' L' F L B' L' U' L U2 L' / F U R U' R' F D' B L2 B' D F U2 F (14f*)
B' U2 B2 L' B' L' B2 F' L' F L B2 L2 / B' R' U' R' D' R2 U R D R' B2 U2 B' (13f*)
B U2 R U' L' U R' U' L2 U' L' B' (12f*)
B' U2 B U B' R U B U' B' R' B (12f*)
B' U L' U B2 U B2 U' B2 L U2 B / B' U L' D L2 U L2 D' B2 L U2 B (12f*)
F' U2 F2 U F2 L D' L D L U L F (13f*)
B U2 F' L F L' B' L' B2 L' B2 L2 / B' R' F R' B2 U2 B' F2 U2 F' R2 F2 / B' R' F R' B' F' L2 B R2 B' L2 B2 (12f*)
F U' R U L U2 L' R' U2 B' U' F' U2 B / L2 F2 R' F' R F' L2 B2 D' F R2 F' D B2 (14f*)
B2 R F R B2 R B2 R B2 R2 B' F' R' B / B2 R F' R' D2 R' D2 R F2 D2 B F' R B / B2 R F' L' F2 R' F2 L F2 D2 B F' R B (14f*)
R U2 B F R2 B' R' B R' B' F' U2 R' / R U2 B2 U2 F' L' B L' B2 L2 B' F R' / F' U' F R2 B2 L' B' L B' R2 F' U F / F' D' L F2 R2 B' R' B R' F2 L' D F / L F L' R2 B' R' B L R2 F R F2 L' / L U2 L' U' L U' L' B L U L' U' B' (13f*)
R2 D' F' D' F2 D R U2 F U R2 D R' / B' L U L' U' L' B U2 F' L' F U2 L / F U' B2 U R' D B2 D' B2 R F' U' B2 / F2 D' F U2 F' D F2 U R B U B' R' / B' U B U' L F R U R' U2 F' U L' / B' F' L' F R' U F' U' F' L F2 R B (13f*)


----------



## Kirjava (Jun 19, 2014)

Kirjava said:


> I should be able to prove/disprove it very soon.



Managed to confirm it's true. More fun stuff coming soon.


----------



## Tempus (Jun 20, 2014)

cmowla said:


> If you have other reasons as to why you do not want to post these instructions with your alg set (perhaps you want to publish this result in a journal?) then I understand completely.


Wait, there's a journal!?  Please point me toward the nearest journal. 



cmowla said:


> However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you _prove_ it by giving us instructions on how to use your set of 18 algs to solve every last layer case.


Okay. I spent the day writing an instruction generator, and it produced a rather large text file.



cmowla said:


> Maybe others on this forum have other ideas about how to abstractly represent a 3x3x3 last layer case, but this is mine.


That's okay, I found my own way in the interim.



cmowla said:


> You do not have to post these in a post directly. Put them all in a txt file and attach it to a post, or if it is too large to be put on your share of the forums attachment storage space, then upload it to an external file hosting site you trust (or your own website) and then provide us with a link.


"Attachment space"? This is a new term to me. I have been on other forums, but have never seen one with a dedicated attachment storage area before. Intriguing. Having looked into the matter, I believe that, with sufficient compression, I should be able to fit this document just within the file size limits for a .zip file attachment. If all goes well, it will be attached to this message at the end.




Kirjava said:


> Awesome!
> 
> I have another request; would it be much bother for me to suggest alternative lists to see if I can find a better subset that includes a group of forced algs? My intention is to minimise arbitrary algs. I don't know how much effort/processing time this requires on your part.


Fire away, just don't hit me with a dozen lists all at once. It'll probably take a couple days of processing to get a semi-reliable result for one. Alternatively, if you'd like, I could take more than one list and just calculate the redundancy scores for each, as mentioned previously, and just process whichever one scores the lowest. It doesn't take long to calculate the redundancy score.



Kirjava said:


> I should be able to prove/disprove it very soon.


Oh, ye of little faith! 




qqwref said:


> Very cool! Here are some optimal solutions (not counting AUF). Any of these can be rotated by any y rotation to make them nicer. No alg requires more than 14 moves.
> 
> 
> 
> ...


I gather you mean that you refactored my set of 18 algorithms such that they are permitted to use D, L, and B turns in addition to the U, R, and F turns to which I restricted mine?




Kirjava said:


> Managed to confirm it's true. More fun stuff coming soon.


I can't imagine why anyone ever doubted me, what with my lengthy track record on this forum going back decades and all.

Oh, wait...no it doesn't. Resume doubting. 

Okay, everybody, here's what you all wanted. I present you with *a set of instructions for how to use the set of 18 algorithms to do 2-Look Last Layer:*
View attachment SmallestMISetWithInstructions.zip
You'll notice that I removed the superfluous U turns from the beginnings and ends of algorithms, and I sorted the algorithm list by number of turns, shortest first. I tried doing a couple LL solves with it, and I've gotta say...it's a lot slower than my usual method.
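(Stripping those superfluous U turns is mechanical, since leading and trailing U-layer moves merge into the surrounding AUFs. A sketch, not Tempus's actual code:)

```python
def strip_auf(alg: str) -> str:
    """Drop leading and trailing U-layer moves; they fold into AUF."""
    moves = alg.split()
    while moves and moves[0] in ("U", "U'", "U2"):
        moves.pop(0)
    while moves and moves[-1] in ("U", "U'", "U2"):
        moves.pop()
    return " ".join(moves)

# Fifth algorithm of the 18-alg set, trimmed from 11 moves to 9:
print(strip_auf("U2 R' U' F U R U' R' F' R U"))  # R' U' F U R U' R' F' R
```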


----------



## IRNjuggle28 (Jun 20, 2014)

Is it thought that 18 is the lowest number it can be done with?


----------



## Kirjava (Jun 20, 2014)

I made something cool - 2LL and other stuff



Tempus said:


> Fire away, just don't hit me with a dozen lists all at once. It'll probably take a couple days of processing to get a semi-reliable result for one. Alternatively, if you'd like, I could take more than one list and just calculate the redundancy scores for each, as mentioned previously, and just process whichever one scores the lowest. It doesn't take long to calculate the redundancy score.



... are the scores generated easily? Could I give you a list of 30 algs and have you try every permutation of 18?



Tempus said:


> I can't imagine why anyone ever doubted me, what with my lengthy track record on this forum going back decades and all.



I actually didn't, possibly foolishly - but cmowla asked for proof and I realised I could check.


----------



## Renslay (Jun 20, 2014)

Kirjava said:


> I made something cool - 2LL and other stuff



That is actually really cool! Thanks!


----------



## Tempus (Jun 21, 2014)

IRNjuggle28 said:


> Is it thought that 18 is the lowest number it can be done with?


I tend to believe it is, but I've been wrong before. The problem is that upper bounds for the number are determined by examples, and lower bounds are determined by theory, and until recently both were rather simplistic in nature. I have improved the upper bound with a better example, i.e. a set of 18, but improving the lower bound would entail improving theory, and I do not know off the top of my head how to do that. Theory, as it stands, can only say that the number is at least 8, so all we can _prove_ is that it's somewhere from 8 to 18, inclusive. I can't say for sure that it's 18, but I am certain it's closer to 18 than it is to 8.
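(If the lower bound of 8 comes from the coverage polynomial discussed earlier in the thread — a guess on my part — the arithmetic checks out: the smallest n with 1024n^2 + 64n + 4 >= 62208 is 8.)

```python
def max_coverage(n: int) -> int:
    # theoretical two-look coverage limit for n algs plus mirrors/inverses
    return 1024 * n**2 + 64 * n + 4

n = 1
while max_coverage(n) < 62208:
    n += 1
print(n)  # 8
```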




Kirjava said:


> ... are the scores generated easily? Could I give you a list of 30 algs and have you try every permutation of 18?


I was going to say that from a list of 30 there are 86,493,225 possible subsets of 18, and that even if my scoring function took a fraction of a second to run, it would take forever, but I have had an insight and have added a new feature to my program for handling this. What I have done is to make it so that if the forced algorithm list is shorter than or equal to the desired number of algorithms, my program will act as it did before, but if the forced algorithm list is _longer_ than the desired number of algorithms, it will attempt to generate a subset of that list that is the desired length and which has maximal coverage of the 62208 possible cases. This should make it much faster and easier to do what you want. It should reduce the number of cases that rely on generated algorithms as much as possible.
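(The 86,493,225 figure is just the number of 18-element subsets of a 30-element set:)

```python
import math

# C(30, 18): ways to choose 18 algorithms from a list of 30
print(math.comb(30, 18))  # 86493225
```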

So, shoot your list of 30 (or more) algorithms my way, and I'll see what my latest creation can make of it. All I ask is that they are expressed in a stationary-core format so that I don't have to hand-translate them.



Kirjava said:


> I actually didn't, possibly foolishly - but cmowla asked for proof and I realised I could check.


Perhaps you're just a good judge of character?


----------



## Christopher Mowla (Jun 21, 2014)

Tempus,

I never doubted you. I asked for proof for the cubing community as a whole because most of us do not have a program handy which can verify that that list is a full last layer generating set: but that's as far as things go. Despite your efforts so far in this thread, none of us should have been immediately convinced that your list did what you said it did, as there are many ways to err with any involved calculation. Thom probably is a good judge of character, but I hope you were joking with that statement (no matter how good someone's character is, they are still able to make mistakes).

Anyway, great finding, and welcome to the forums.


----------



## Tempus (Jun 21, 2014)

cmowla said:


> Tempus,
> 
> I never doubted you. I asked for proof for the cubing community as a whole because most of us do not have a program handy which can verify that that list is a full last layer generating set: but that's as far as things go. Despite your efforts so far in this thread, none of us should have been immediately convinced that your list did what you said it did, as there are many ways to err with any involved calculation. Thom probably is a good judge of character, but I hope you were joking with that statement (no matter how good someone's character is, they are still able to make mistakes).


I'm sorry if something I said left you with the impression that I was offended, but I was just being jocose.

Honestly, it never even occurred to me that people might be worried about the possibility of errors. I thought that people were concerned about hoaxes or trolling or something because 18 seemed impossibly low. Perhaps I misunderstood. Either way, it's all good, dude. :tu



cmowla said:


> Anyway, great finding, and welcome to the forums.


Thanks. It's an interesting place.


----------



## Kirjava (Jun 21, 2014)

Tempus said:


> So, shoot your list of 30 (or more) algorithms my way, and I'll see what my latest creation can make of it. All I ask is that they are expressed in a stationary-core format so that I don't have to hand-translate them.



Excellent, that saves so much time trying to work out lists with algs that are good but multifunctional enough to cover stuff.

Here's my list in a tasty format: 



Spoiler



R U R' U R U2 R'
R U R' U R U' R' U R U2 R'
R U R' U R U' R' U R U' R' U R U2 R'
R U R' U' R' F R2 U' R' U' R U R' F'
R U R2 F R F2 U F
R U R2 F R F2 U F
F U R U' R' F' R' F' U' F U R
R' U2 R2 U R2 U R U' R U' R'
L' U2 L U2 L F' L' F
F R' F' R U2 R U2 R'
F R U R' U' F'
F R U R' U' R U R' U' F'
L' R' D2 R U2 R' D2 R U2 L
R U' L' U R' U' L
L F R' F R F2 L'
L F' L' U' L F L' F' U F 
R U' R' U' F' U2 F U2 R U2 R'
R B' R F2 R' B R F2 R2
R2 L2 D' R L' F2 R' L D' R2 L2
R2 L2 D R2 L2 U2 R2 L2 D R2 L2
R U R' U' R' F R F'
L F R' F' L' F R F' 
R' F R U R' U' F' U R
R' U' R' F R F' U R
R2 U' R F R' U R2 U' R' F' R
F R U R' U' R U' R' U' R U R' F'
F R U' R D R' U2 R D' R2 U' F'
R U R' U F' L' U L F U' R U' R'
L F R' F R F L' F R' F' L F L'R
F' R D2 R' F U2 F' R D2 R' F U2
R2 D R' U2 R D' R' U2 R'
R' F' R U R' U' R' F R U R
R' F' R U R' U' R' F R2 U' R' U2 R
F R2 D R' U R D' R2 U' F'
F' L F L' U' L' U L
R U' L' U R' U L U L' U L
R' U' R U' R' U F' U F R
R' U' R' F R F' U R
R' U' R' F R F' R U' R' U2 R
F R U' R' U' L' U' L U L F' L2 U L
R U R' U' L R' F R F' L' 
R' L F R L' U2 R' L F R L'
R U2 R2 U' R2 U' R2 U2 R
L' U R U' L U2 R' U R U2 R'
R' F R2 B' R' F' R2 B R'
R U2 R' U2 R' F R2 U R' U' F'
F R U' R' U2 R U R' F'
R U' L' U R' U L U L' U2 R U' L U R'
F R U' R' U' R U2 R' U' F'
R' U' R U R' F' R U R' U' R' F R2
L F L' R U R' U' L F' L'
R U R' U R U' R' U' R' F R F'
F U R U2 R' U' R U R' F'
F R U R' U' R U' R' U R U R' F'


I haven't checked for isomorphic cases.


----------



## IRNjuggle28 (Jun 22, 2014)

Tempus said:


> I'm not talking about the difficulty of learning it, but of speedy recognition. I figure that the speed of a solve relies on three things: speed of turning, speed of recognition, and efficiency of the solution system in reducing the number of turns. Some systems are very efficient in terms of turn count, but very slow in terms of recognition. Some are super-fast in terms of recognition, but require significantly more turns. The goal is a system that both reduces turn count _AND_ aids rapid recognition, and I suspect that these efforts to reduce the number of algorithms actually work against that goal, slowing recognition and increasing turn count.


Posted by Kirjava:


> This system would be different from LL systems with large algorithm subsets. I'm inclined to think that the problem they have is algorithm recall, not recognition. The difficulty lies in remembering an alg from a pool of 400.


Yes, that's the problem with stuff like full ZBLL, but not with the method that is currently fast: OLL and PLL. Is there a reason to think that the opposite extreme of ZBLL is what will be fast? It seems like the optimal last layer method would be the maximum number of algs that can be memorized well enough to execute quickly, not simply the smallest number of algs possible. Your explanation for why this could be a viable speedsolving method seemed like it amounted to "memorizing hundreds of algs is impractical, so making a method with 18 will fix that problem," and completely ignored the fact that a method with 78 algs was perfectly viable. 


> Method development is reaching a point where you need to push into using more abstract concepts and trying ambitious weird things. Step concatenation and other standard structures have maxed out their usefulness and we need to do something new and different to improve on what we already have. I believe if I can circumvent problems with techniques like this (in this case with clever case sorting) they can prove to be a viable alternative.


The only ambitious thing about this method is the very low alg count, and that's not the trait that needs to be ambitious to be a viable speedsolving method. Your "clever case sorting" and recognition tricks, at the very best, will succeed in making this method only a bit worse than OLL/PLL in terms of recognition. Correct me if I'm wrong, but I don't know how a method like this could be adapted to be as easy as recognizing orientation and then recognizing permutation. In terms of speedsolving, having an alg count this low really only helps with learning the method. It doesn't help with execution of it in a speedsolve. 

The best case scenario for this method seems to be that it becomes a decent intermediate last layer method for those unwilling to learn as many algs as OLL and PLL require. And that's it. Not a groundbreaking speedsolving method.

I do not mean any of this as a putdown of the work you and Tempus have put in. You've done astounding method development work before and I have no doubt you can do it again. I just don't see how it will work yet. I think Tempus said it well here:


> Don't get me wrong. I'm still interested from a theory standpoint, but I don't believe the result could ever be practical for me. I have trouble just finding the next F2L pair.


----------



## Tempus (Jun 22, 2014)

Kirjava said:


> Excellent, that saves so much time trying to work out lists with algs that are good but multifunctional enough to cover stuff.
> 
> Here's my list in a tasty format:
> 
> ...


Okay, I've tried running this list of 54 through my program, and here's some preliminary information:

#29 is missing a space, so I added one. Hardly worth mentioning, but enough to make my program fail an assertion. (It's exceedingly picky.)
#45 is broken, as it disrupts F2L. I figured this was likely caused by a typo, so I tried tweaking it various ways and found that if I change the middle turn from R' to R2 it will no longer disrupt F2L, so I'm assuming that you meant to say R' F R2 B' R2 F' R2 B R'. Let me know if this is incorrect.
#5 is identical to #6, so I'm removing #6.
#24 is identical to #38, so I'm removing #38.
#22 is equivalent to #32, but #22 is three moves shorter, so I'm removing #32.
#9 is a mirror-inverse of #10, but #10 doesn't involve L turns, so I'm removing #9.
#21 is a mirror-inverse of #35, but #21 doesn't involve L turns, so I'm removing #35.
#28 is a mirror-inverse of #40, but #28 is one move shorter, so I'm removing #40.
(To see the truth of some of the above information, you need to consider AUF.)
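(The inverse bookkeeping above is mechanical: inverting an algorithm reverses the move order and flips each turn's direction. A quick sketch, not the checker Tempus actually used:)

```python
def invert(alg: str) -> str:
    """Inverse of an algorithm: reverse the moves and flip each direction."""
    def flip(m: str) -> str:
        if m.endswith("'"):
            return m[:-1]   # R' -> R
        if m.endswith("2"):
            return m        # R2 is its own inverse
        return m + "'"      # R -> R'
    return " ".join(flip(m) for m in reversed(alg.split()))

print(invert("R U R' U R U2 R'"))  # R U2 R' U' R U' R'
```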



Spoiler: Here, then, is the redacted list of 48




R U R' U R U2 R'
R U R' U R U' R' U R U2 R'
R U R' U R U' R' U R U' R' U R U2 R'
R U R' U' R' F R2 U' R' U' R U R' F'
R U R2 F R F2 U F
F U R U' R' F' R' F' U' F U R
R' U2 R2 U R2 U R U' R U' R'
F R' F' R U2 R U2 R'
F R U R' U' F'
F R U R' U' R U R' U' F'
L' R' D2 R U2 R' D2 R U2 L
R U' L' U R' U' L
L F R' F R F2 L'
L F' L' U' L F L' F' U F
R U' R' U' F' U2 F U2 R U2 R'
R B' R F2 R' B R F2 R2
R2 L2 D' R L' F2 R' L D' R2 L2
R2 L2 D R2 L2 U2 R2 L2 D R2 L2
R U R' U' R' F R F'
L F R' F' L' F R F'
R' F R U R' U' F' U R
R' U' R' F R F' U R
R2 U' R F R' U R2 U' R' F' R
F R U R' U' R U' R' U' R U R' F'
F R U' R D R' U2 R D' R2 U' F'
R U R' U F' L' U L F U' R U' R'
L F R' F R F L' F R' F' L F L' R
F' R D2 R' F U2 F' R D2 R' F U2
R2 D R' U2 R D' R' U2 R'
R' F' R U R' U' R' F R2 U' R' U2 R
F R2 D R' U R D' R2 U' F'
R U' L' U R' U L U L' U L
R' U' R U' R' U F' U F R
R' U' R' F R F' R U' R' U2 R
R U R' U' L R' F R F' L'
R' L F R L' U2 R' L F R L'
R U2 R2 U' R2 U' R2 U2 R
L' U R U' L U2 R' U R U2 R'
R' F R2 B' R2 F' R2 B R'
R U2 R' U2 R' F R2 U R' U' F'
F R U' R' U2 R U R' F'
R U' L' U R' U L U L' U2 R U' L U R'
F R U' R' U' R U2 R' U' F'
R' U' R U R' F' R U R' U' R' F R2
L F L' R U R' U' L F' L'
R U R' U R U' R' U' R' F R F'
F U R U2 R' U' R U R' F'
F R U R' U' R U' R' U R U R' F'



(From here on in, I will be using the algorithm numbers of this list instead of the numbers of the original list.)

This list, taken in its entirety, is sufficient to cover all but 56 of the 62208 possible LL states in two looks. The problem I have now is that your precise goal is unclear to me. You have stated that you want to minimize the number of generated algorithms, but surely you must also want to reduce the number of algorithms overall, or you would simply leave all 48 in place. Until you precisely clarify your intent and define how you wish to balance these two conflicting priorities, here are the results of many hours of calculations to tide you over:


| Size of subset | Minimum coverage gap | Maximum coverage |
| --- | --- | --- |
| 17 | 896 | 98.56% |
| 18 | 680 | 98.91% |
| 19 | 524 | 99.16% |
| 20 | 392 | 99.37% |
| 21 | 328 | 99.47% |
| 22 | 264 | 99.58% |
| 23 | 200 | 99.68% |
| 24 | 136 | 99.78% |
| 25 | 112 | 99.82% |
| 26 | 88 | 99.86% |
| 27 | 72 | 99.88% |
| 28 or more | 56 | 99.91% |

An interesting fact to note is that you can remove 20 of the 48 algorithms without reducing the coverage.
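Finding such subsets is an instance of the maximum-coverage problem, which is NP-hard in general; exhaustive search works for small subsets, and a greedy heuristic (repeatedly take the alg that covers the most still-uncovered states) is the standard approximation. Here's a toy sketch with hypothetical miniature data, not Tempus's actual program:

```python
def greedy_cover(universe, covers, k):
    """Greedily pick up to k algs, maximizing the number of covered states."""
    uncovered = set(universe)
    chosen = []
    for _ in range(k):
        # Pick the alg handling the most still-uncovered states.
        best = max(covers, key=lambda a: len(covers[a] & uncovered))
        if not covers[best] & uncovered:
            break  # nothing left that any alg can cover
        chosen.append(best)
        uncovered -= covers[best]
    return chosen, uncovered

# Hypothetical miniature data: 8 "states", 4 "algs" (real LL data has 62208).
covers = {
    "sune":  {1, 2, 3, 4},
    "tperm": {3, 4, 5},
    "uperm": {5, 6},
    "hperm": {7},
}
chosen, gap = greedy_cover(range(1, 9), covers, 3)
print(chosen, gap)  # ['sune', 'uperm', 'hperm'] {8}
```

The greedy pick is only an approximation; Tempus's table of optimal subsets per size suggests something closer to exhaustive search over candidate subsets.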


----------



## Kirjava (Jun 22, 2014)

IRNjuggle28 said:


> Posted by Kirjava:



You're posting in the wrong thread and doing so has confused you.

Please understand that the minimum number of algs needed to solve the last layer and my system are two different methods.



IRNjuggle28 said:


> Yes, that's the problem with stuff like full ZBLL, but not with the method that is currently fast: OLL and PLL. Is there a reason to think that the opposite extreme of ZBLL is what will be fast? It seems like the optimal last layer method would be the maximum number of algs that can be memorized well enough to execute quickly, not simply the smallest number of algs possible. Your explanation for why this could be a viable speedsolving method seemed like it amounted to "memorizing hundreds of algs is impractical, so making a method with 18 will fix that problem," and completely ignored the fact that a method with 78 algs was perfectly viable.



You've come to the wrong conclusion; my thread was posted 2 years ago, before any of this happened - that can't possibly be my line of thinking. My original proposition had 61 algs. I've recently said in the thread that total alg count isn't important.

I'm not interested in the low alg count. I'm interested in the complete case coverage.



IRNjuggle28 said:


> The only ambitious thing about this method is the very low alg count, and that's not the trait that needs to be ambitious to be a viable speedsolving method. Your "clever case sorting" and recognition tricks, at the very best, will succeed in making this method only a bit worse than OLL/PLL in terms of recognition. Correct me if I'm wrong, but I don't know how a method like this could be adapted to be as easy as recognizing orientation and then recognizing permutation. In terms of speedsolving, having an alg count this low really only helps with learning the method. It doesn't help with execution of it in a speedsolve.



I've already posted my thoughts on people's overestimation of recognition.



IRNjuggle28 said:


> Best case scenario for this method seems to be that it becomes a decent intermediate last layer method for those unwilling to learn as many algs as OLL and PLL require. And that's it. Not a groundbreaking speedsolving method.



No, that's an awful conclusion - my system will be harder to learn than OLL/PLL. If you really think that recognition of all things is what makes this unusable we don't have anything further to discuss.



Tempus said:


> Okay, I've tried running this list of 54 through my program, and here's some preliminary information:



Thanks for the fixes, as I said, I hadn't checked for mirrors/inverses.



Tempus said:


> This list, taken in its entirety, is sufficient to cover all but 56 of the 62208 possible LL states in two looks. The problem I have now is that your precise goal is unclear to me. You have stated that you want to minimize the number of generated algorithms, but surely you must also want to reduce the number of algorithms overall, or you would simply leave all 48 in place.



Minimising the number of generated algorithms is more important than minimising the total count. 

Having to use 48 algs instead of 18 is nothing compared to learning which cases go with each alg. People already know all these algorithms anyway. I would have included more but I only attempted to include very good cases.

I think the best situation for my needs is to ensure a reduction in generated algs by keeping everything.



Tempus said:


> An interesting fact to note is that you can remove 20 of the 48 algorithms without reducing the coverage.



If you can remove 20 of the algorithms and produce the same results, removing 20 is desirable. However, will this have an effect on the extra algs required? I believe it would (if only minimally). If so, leave them in. If not, remove them (I can probably select which would be better to remove; I assume only removing certain ones would have this effect).

You make the best edit reasons.


----------



## 10461394944000 (Jun 22, 2014)

Tempus said:


> 62208 possible LL states



um, there are only ~1200 LL states. if you are counting normal/inverse/mirror/mirror-inverse as 1 case then shouldn't there only be about 300-400?


----------



## Kirjava (Jun 22, 2014)

10461394944000 said:


> um, there are only ~1200 LL states. if you are counting normal/inverse/mirror/mirror-inverse as 1 case then shouldn't there only be about 300-400?



1212 ignoring isomorphisms. 62208 distinct cases. (4! 4! 3^4 2^4 /2 /3 /2)
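Kirjava's parenthetical count can be verified directly; the three divisors correspond to the permutation-parity, corner-twist, and edge-flip constraints:

```python
from math import factorial

raw = factorial(4) * factorial(4) * 3**4 * 2**4  # 746496 unconstrained states
distinct = raw // (2 * 3 * 2)                    # parity, corner twist, edge flip
print(distinct)  # 62208
```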


----------



## stoic (Jun 22, 2014)

Tempus said:


> This list, taken in its entirety, is sufficient to cover all but 56 of the 62208 possible LL states in two looks.


Not sure if it's easy for you to check, but what's the chance that these 56 all have much in common?
Eg if they were all dot OLLs they'd be easy to avoid or whatever...


----------



## irontwig (Jun 22, 2014)

So basically 1/1000 you get a bad case (solvable in 3 algs perhaps?). I would say that's good enough from a pragmatic speedsolving perspective rather than a theoretical mathematical one. After all some cases are just bad.


----------



## Kirjava (Jun 22, 2014)

irontwig said:


> So basically 1/1000 you get a bad case (solvable in 3 algs perhaps?). I would say that's good enough from a pragmatic speedsolving perspective rather than a theoretical mathematical one. After all some cases are just bad.



Or you learn 1LLL for those cases.


----------



## 10461394944000 (Jun 22, 2014)

Kirjava said:


> 1212 ignoring isomorphisms. 62208 distinct cases. (4! 4! 3^4 2^4 /2 /3 /2)



o ok I didn't know the ~1200 cases was with mirrors and stuff removed.


----------



## Tempus (Jun 23, 2014)

Kirjava said:


> Minimising the number of generated algorithms is more important than minimising the total count.
> 
> Having to use 48 algs instead of 18 is nothing compared to learning which cases go with each alg. People already know all these algorithms anyway. I would have included more but I only attempted to include very good cases.
> 
> I think the best situation for my needs is to ensure a reduction in generated algs by keeping everything.


Well, if I keep all 48, my program says an additional two generated algorithms are sufficient for 2LLL. If I restrict it to one, some last-layer states always remain uncovered.



Kirjava said:


> If you can remove 20 of the algorithms and produce the same results, removing 20 is desirable. However, will this have an effect on the extra algs required? I believe it would do so (if minimally)? If so, leave them in. If not, remove them (I can probably select which would be better to remove (I assume only removing certain ones would have this effect)).


Well, I wrote some crude algorithm length awareness into my program, so that it will use a length score (equal to the total algorithm length plus the maximum algorithm length) as a tiebreaker. Using this, there appears to be only one optimally short (length score=303) 28-algorithm subset that still keeps the coverage gap at 56 last-layer states.


Spoiler: Here it is



1. R U R' U R U2 R'
2. R U R' U R U' R' U R U2 R'
5. R U R2 F R F2 U F
8. F R' F' R U2 R U2 R'
10. F R U R' U' R U R' U' F'
13. L F R' F R F2 L'
14. L F' L' U' L F L' F' U F
16. R B' R F2 R' B R F2 R2
19. R U R' U' R' F R F'
22. R' U' R' F R F' U R
25. F R U' R D R' U2 R D' R2 U' F'
26. R U R' U F' L' U L F U' R U' R'
27. L F R' F R F L' F R' F' L F L' R
29. R2 D R' U2 R D' R' U2 R'
30. R' F' R U R' U' R' F R2 U' R' U2 R
31. F R2 D R' U R D' R2 U' F'
32. R U' L' U R' U L U L' U L
33. R' U' R U' R' U F' U F R
34. R' U' R' F R F' R U' R' U2 R
35. R U R' U' L R' F R F' L'
36. R' L F R L' U2 R' L F R L'
40. R U2 R' U2 R' F R2 U R' U' F'
43. F R U' R' U' R U2 R' U' F'
44. R' U' R U R' F' R U R' U' R' F R2
45. L F L' R U R' U' L F' L'
46. R U R' U R U' R' U' R' F R F'
47. F U R U2 R' U' R U R' F'
48. F R U R' U' R U' R' U R U R' F'


Now, since one generated algorithm was insufficient to do 2LLL with the whole set of 48, we know that at least 2 will be required here as well, and 2 does indeed prove sufficient. This means that, at least in this case, removing the other 20 had no practical effect.
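As an aside, the length score Tempus describes (total algorithm length plus maximum algorithm length) is easy to state in code; this is a reconstruction from the description, not the actual program:

```python
def length_score(algs):
    """Total moves across all algs, plus the length of the longest single alg."""
    lengths = [len(alg.split()) for alg in algs]
    return sum(lengths) + max(lengths)

# One of the generated pairs quoted in this thread: 11 + 12 + max(11, 12) = 35.
pair = ["F R U R2 U2 R2 U R2 U R F'", "F R' U' R2 F R' F' R2 U R U F'"]
print(length_score(pair))  # 35
```

This matches the generated pairs listed just below, each of which Tempus reports as having a length score of 35.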

Here are a half-dozen pairs of generated algorithms, each of which has a length score of 35, and each of which is sufficient, if added to the above set of 28, to form a 30-algorithm 2LLL set. You can choose which pair you want to use based on your own sense of whimsy. 


1. F R U R2 U2 R2 U R2 U R F' (11f) / F R' U' R2 F R' F' R2 U R U F' (12f)
2. R' F U F2 U F2 U2 F2 U F R (11f) / R' U F U F2 R' F' R F2 U' F' R (12f)
3. R' F' U' F2 U2 F2 U' F2 U' F' R (11f) / R' F' R U2 R' U' R U' R' U2 F R (12f)
4. R' F U F2 U F2 U2 F2 U F R (11f) / R' F' R U2 R' U' R U' R' U2 F R (12f)
5. R' F U F2 U F2 U2 F2 U F R (11f) / F U' R' U' R2 F R F' R2 U R F' (12f)
6. R' F U F2 U F2 U2 F2 U F R (11f) / R' F U F2 R' F R F2 U' F' U' R (12f)

Next, I plan to try to figure out what the smallest subset that can be completed using just 2 generated algorithms is, but this may or may not be useful to you. Right now, using 28+2, only 56 cases require using one of the 2. If I manage to make a 27+2 set that works, it will definitely mean that more than 56 LL states will require using a generated algorithm. So, I have to ask you, is the goal to minimize the frequency of solves that require the use of generated algorithms, or is the goal to minimize the number of generated algorithms that must be initially memorized?



Kirjava said:


> You make the best edit reasons.


Thank you. I wasn't sure anyone had noticed. 




10461394944000 said:


> um, there are only ~1200 LL states. if you are counting normal/inverse/mirror/mirror-inverse as 1 case then shouldn't there only be about 300-400?


Here's how to mentally count the number of last-layer states:

*Axiom #1:* Parity says that if you know the exact state of seven corners, you can determine the state of the eighth.
*Axiom #2:* For any given corner configuration, the same holds true for the edges.
*Axiom #3:* Whether the number of edge swaps is odd or even is determined by the corner configuration.
Now, imagine a cube that has all of the stickers peeled off of one last-layer edge and one last-layer corner.
Given Axioms #1 and #2, the number of last-layer states for this hypothetical cube is the same as for a normal cube.
There are 4!=24 permutations for the last-layer corners.
There are 3^3=27 orientations for the three last-layer corners that still have their stickers.
For any given corner configuration, there are 4!/2=12 possible edge permutations. This is because half of them are excluded by Axiom #3.
There are 2^3=8 possible orientations for the three last-layer edges that still have their stickers.
Multiply these all together and you get 24*27*12*8=62,208 possible last-layer states.



ellwd said:


> Not sure if it's easy for you to check, but what's the chance that these 56 all have much in common?
> Eg if they were all dot OLLs they'd be easy to avoid or whatever...


Only 16 of the 56 uncovered last-layer states are "dot" states. It's pleasant to think that they would all have something in common, but it doesn't really make sense to expect it, as what you have left are the corner cases left over after an artificial intelligence has aimed for maximal coverage using minimal resources.

Imagine it this way: If you had a white car and were told to color it red instead using just a stack of large circular red stickers that is slightly too small to cover every square inch of the white paint, would you expect all of the white bits still showing when you were done to have something significant in common?




irontwig said:


> So basically 1/1000 you get a bad case (solvable in 3 algs perhaps?). I would say that's good enough from a pragmatic speedsolving perspective rather than a theoretical mathematical one. After all some cases are just bad.


'Tis the Puzzle _*Theory*_ sub-forum. Little by way of pragmatism dwells within _these_ hallowed halls.


----------



## stoic (Jun 23, 2014)

Tempus said:


> Only 16 of the 56 uncovered last-layer states are "dot" states. It's pleasant to think that they would all have something in common, but it doesn't really make sense to expect it, as what you have left are the corner cases left over after an artificial intelligence has aimed for maximal coverage using minimal resources.
> 
> Imagine it this way: If you had a white car and were told to color it red instead using just a stack of large circular red stickers that is slightly too small to cover every square inch of the white paint, would you expect all of the white bits still showing when you were done to have something significant in common?


Thanks for the analogy, nicely done. 
I didn't really expect it, but it seemed rude not to ask.


----------



## Kirjava (Jun 23, 2014)

Tempus said:


> Well, if I keep all 48, my program says an additional two generated algorithms are sufficient for 2LLL. If I restrict it to one, some last-layer states always remain uncovered.



2 is great, the best thing is that the algs aren't even that bad.



Tempus said:


> Well, I wrote some crude algorithm length awareness into my program, so that it will use a length score (equal to the total algorithm length plus the maximum algorithm length) as a tiebreaker. Using this, there appears to be only one optimally short (length score=303) 28-algorithm subset that still keeps the coverage gap at 56 last-layer states.



This helps what I'm trying to do a _lot_ and I didn't even ask you for it. So thoughtful <3



Tempus said:


> Now, since one generated algorithm was insufficient to do 2LLL with the whole set of 48, we know that at least 2 will be required here as well, and 2 does indeed prove sufficient. This means that, at least in this case, removing the other 20 had no practical effect.



30 algs it is then!



Tempus said:


> Next, I plan to try to figure out what the smallest subset that can be completed using just 2 generated algorithms is, but this may or may not be useful to you. Right now, using 28+2, only 56 cases require using one of the 2. If I manage to make a 27+2 set that works, it will definitely mean that more than 56 LL states will require using a generated algorithm. So, I have to ask you, is the goal to minimize the frequency of solves that require the use of generated algorithms, or is the goal to minimize the number of generated algorithms that must be initially memorized?



Initial algorithm memorisation quantity isn't an issue. I'd rather reduce the number of cases using a generated algorithm (though they aren't that bad). 

This is pretty much beyond what I asked for, and it's going to allow me to complete my system. Thank you so much! I'll mention you in my nobel prize acceptance speech.


----------



## Tempus (Jun 25, 2014)

Kirjava said:


> 2 is great, the best thing is that the algs aren't even that bad.


That's the crude algorithm length awareness at work. When there are thousands of options to choose from, there is wiggle room, and it can make a big difference, but when the number of options is more limited, it has little effect, as it's just used for tie-breaking.



Kirjava said:


> This helps what I'm trying to do a _lot_ and I didn't even ask you for it. So thoughtful <3


Glad to be of service. 



Kirjava said:


> 30 algs it is then!


Over the last couple of days, and to satisfy my own curiosity, I've been seeing how far I could push it while still keeping the generated algorithm count at 2. Just for reference, the probability of having to use one of the generated algorithms in your current 28+2 set is 1 in 1,110.857, or about 0.090%. Here are the results:


Spoiler: 27+2



Given this list of 27:

1. R U R' U R U2 R'
2. R U R' U R U' R' U R U2 R'
5. R U R2 F R F2 U F
8. F R' F' R U2 R U2 R'
10. F R U R' U' R U R' U' F'
19. R U R' U' R' F R F'
20. L F R' F' L' F R F'
21. R' F R U R' U' F' U R
22. R' U' R' F R F' U R
23. R2 U' R F R' U R2 U' R' F' R
25. F R U' R D R' U2 R D' R2 U' F'
26. R U R' U F' L' U L F U' R U' R'
27. L F R' F R F L' F R' F' L F L' R
30. R' F' R U R' U' R' F R2 U' R' U2 R
31. F R2 D R' U R D' R2 U' F'
32. R U' L' U R' U L U L' U L
33. R' U' R U' R' U F' U F R
35. R U R' U' L R' F R F' L'
36. R' L F R L' U2 R' L F R L'
40. R U2 R' U2 R' F R2 U R' U' F'
41. F R U' R' U2 R U R' F'
43. F R U' R' U' R U2 R' U' F'
44. R' U' R U R' F' R U R' U' R' F R2
45. L F L' R U R' U' L F' L'
46. R U R' U R U' R' U' R' F R F'
47. F U R U2 R' U' R U R' F'
48. F R U R' U' R U' R' U R U R' F'
...Here are some generated pairs that each complete it:

1. R U2 R' U R' F' U' F U R2 U R' (12f) / R2 U' F U R' U' F' U R' U' F R' F' (13f)
2. R U2 R' U' R U' R2 F' U' F U R (12f) / F2 U R' U' F U R U' F U R' F R (13f)
3. R U' R2 U' F' U F R U' R U2 R' (12f) / R2 U' F U R' U' F' U R' U' F R' F' (13f)
4. F' U2 F U' F R U R' U' F2 U' F (12f) / F' U' F2 U F2 U F2 U2 F' U F' U F (13f)
5. U2 F U2 R' F' U' F U R2 U2 R' F' (12f) / R2 F2 R U2 R U R2 F2 R2 U R' F2 R (13f)
6. F U F' R' F2 U F U' F' U' F' R (12f) / U R' U' F' U F R2 U R' U R U2 R' (13f)

The probability of having to use one of the two generated algorithms is 1 in 864, or about 0.116%.





Spoiler: 26+2



Given this list of 26:

1. R U R' U R U2 R'
2. R U R' U R U' R' U R U2 R'
5. R U R2 F R F2 U F
8. F R' F' R U2 R U2 R'
10. F R U R' U' R U R' U' F'
19. R U R' U' R' F R F'
20. L F R' F' L' F R F'
21. R' F R U R' U' F' U R
22. R' U' R' F R F' U R
23. R2 U' R F R' U R2 U' R' F' R
25. F R U' R D R' U2 R D' R2 U' F'
26. R U R' U F' L' U L F U' R U' R'
27. L F R' F R F L' F R' F' L F L' R
30. R' F' R U R' U' R' F R2 U' R' U2 R
31. F R2 D R' U R D' R2 U' F'
32. R U' L' U R' U L U L' U L
35. R U R' U' L R' F R F' L'
36. R' L F R L' U2 R' L F R L'
40. R U2 R' U2 R' F R2 U R' U' F'
41. F R U' R' U2 R U R' F'
43. F R U' R' U' R U2 R' U' F'
44. R' U' R U R' F' R U R' U' R' F R2
45. L F L' R U R' U' L F' L'
46. R U R' U R U' R' U' R' F R F'
47. F U R U2 R' U' R U R' F'
48. F R U R' U' R U' R' U R U R' F'

...Here are some generated pairs that each complete it:

1. F R' U' R2 U' R2 U2 R U' F' (10f) / F U F' U F U2 R U' R' U R U R' F' (14f)
2. R U' R2 U' F' U F R U' R U2 R' (12f) / F' U2 F U2 F R' F' U2 R U R' U R (13f)
3. F' U F2 U R U' R' F' U F' U2 F (12f) / F R U R' U' R F U F' R' F U' F2 (13f)
4. R' U' F R' F' R2 F R' U R U' F' (12f) / R U' R' F' U F R U R' U R U' R' (13f)
5. R U2 R' U R' F' U' F U R2 U R' (12f) / F' U2 F U2 F R' F' U2 R U R' U R (13f)
6. R' U' F U F' R F2 R' F' R U' F' (12f) / R2 U' F U R' U' F' U R' U' F R' F' (13f)
7. R U2 R' U R' F' U' F U R2 U R' (12f) / R' U' R U' R' U2 F R F' U2 F' U2 F (13f)
8. F' U2 F U' F R U R' U' F2 U' F (12f) / R' F' U' F U F' R' U' R F R' U R2 (13f)
9. F U R' U' R F' R2 F R F' U R (12f) / F2 U R' U' F U R U' F U R' F R (13f)
10. F U R' U' R F' R2 F R F' U R (12f) / R U' R' F' U F R U R' U R U' R' (13f)
11. F' U2 F U' F R U R' U' F2 U' F (12f) / F' U2 F U2 F R' F' U2 R U R' U R (13f)
12. R U2 R' U R' F' U' F U R2 U R' (12f) / R' F2 R U' F U' R U2 R' F' R' F' R (13f)

The probability of having to use one of the two generated algorithms is 1 in 706.909, or about 0.141%.





Spoiler: 25+2



Given this list of 25:

1. R U R' U R U2 R'
5. R U R2 F R F2 U F
8. F R' F' R U2 R U2 R'
9. F R U R' U' F'
10. F R U R' U' R U R' U' F'
14. L F' L' U' L F L' F' U F
19. R U R' U' R' F R F'
22. R' U' R' F R F' U R
23. R2 U' R F R' U R2 U' R' F' R
25. F R U' R D R' U2 R D' R2 U' F'
26. R U R' U F' L' U L F U' R U' R'
27. L F R' F R F L' F R' F' L F L' R
30. R' F' R U R' U' R' F R2 U' R' U2 R
31. F R2 D R' U R D' R2 U' F'
32. R U' L' U R' U L U L' U L
36. R' L F R L' U2 R' L F R L'
39. R' F R2 B' R2 F' R2 B R'
40. R U2 R' U2 R' F R2 U R' U' F'
41. F R U' R' U2 R U R' F'
42. R U' L' U R' U L U L' U2 R U' L U R'
43. F R U' R' U' R U2 R' U' F'
44. R' U' R U R' F' R U R' U' R' F R2
45. L F L' R U R' U' L F' L'
47. F U R U2 R' U' R U R' F'
48. F R U R' U' R U' R' U R U R' F'

...Here are some generated pairs that each complete it:

1. F U2 F2 U' F2 U' F' U2 F R' F' R (12f) / F' U' F2 R U R' U' F' U2 F R' F' R (13f)
2. R' F R F' U2 F U F2 U F2 U2 F' (12f) / F R' F' R U2 R' U' F' U F R2 U' R' (13f)
3. R' F R F' U2 F U F2 U F2 U2 F' (12f) / R U R2 F' U' F U R U2 R' F R F' (13f)
4. R' F R F' U2 F U F2 U F2 U2 F' (12f) / R' F R F' U2 F U R U' R' F2 U F (13f)

The probability of having to use one of the two generated algorithms is 1 in 555.429, or about 0.180%.





Spoiler: 24+2



Given this list of 24:

2. R U R' U R U' R' U R U2 R'
5. R U R2 F R F2 U F
8. F R' F' R U2 R U2 R'
9. F R U R' U' F'
10. F R U R' U' R U R' U' F'
14. L F' L' U' L F L' F' U F
19. R U R' U' R' F R F'
20. L F R' F' L' F R F'
22. R' U' R' F R F' U R
25. F R U' R D R' U2 R D' R2 U' F'
26. R U R' U F' L' U L F U' R U' R'
27. L F R' F R F L' F R' F' L F L' R
30. R' F' R U R' U' R' F R2 U' R' U2 R
31. F R2 D R' U R D' R2 U' F'
34. R' U' R' F R F' R U' R' U2 R
36. R' L F R L' U2 R' L F R L'
40. R U2 R' U2 R' F R2 U R' U' F'
41. F R U' R' U2 R U R' F'
43. F R U' R' U' R U2 R' U' F'
44. R' U' R U R' F' R U R' U' R' F R2
45. L F L' R U R' U' L F' L'
46. R U R' U R U' R' U' R' F R F'
47. F U R U2 R' U' R U R' F'
48. F R U R' U' R U' R' U R U R' F'

...Here are some generated pairs that each complete it:

1. R' F' U F R U R2 F R F2 U' F U R (14f) / R' F U R U' F' U2 F2 R' F U2 F' R F2 (14f)
2. R' U' F' U' F R' F R F' R U' R' U' R (14f) / R' U' F' U F2 R' F' R2 U' R' F' U' F R (14f)
3. F U F' R2 F' R F2 U' F U F R' F R2 (14f) / R' F' U F R U R2 F R F2 U' F U R (14f)
4. F U R U R' F R' F' R F' U F U F' (14f) / R' U' F' U F2 R' F' R2 U' R' F' U' F R (14f)
5. R' U' F' U F2 R' F' R2 U' R' F' U' F R (14f) / R' F U R U' F' U2 F2 R' F U2 F' R F2 (14f)
6. R' U' F' U F2 R' F' R2 U' R' F' U' F R (14f) / F2 R F' R U R U' R2 F R' F2 R' U R (14f)
7. R' F' U F R U R2 F R F2 U' F U R (14f) / F2 R F' R U R U' R2 F R' F2 R' U R (14f)

The probability of having to use one of the two generated algorithms is 1 in 457.412, or about 0.219%.


I was unable to find a 23+2 set that worked. As you can see, the increase in probability of using a generated algorithm remains quite low even at 24+2, but the quality of the 2 generated algorithms is falling due to a lack of maneuvering room.
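All the probabilities quoted in the spoilers above follow from dividing each subset's reported coverage gap into the 62208 LL states. A quick consistency check, using the gap sizes Tempus reports:

```python
TOTAL = 62208  # distinct last-layer states

# Coverage gaps reported above for each subset size.
for name, gap in [("28+2", 56), ("27+2", 72), ("26+2", 88),
                  ("25+2", 112), ("24+2", 136)]:
    print(f"{name}: 1 in {TOTAL / gap:.3f} ({gap / TOTAL:.3%})")
# 28+2: 1 in 1110.857 (0.090%)
# 27+2: 1 in 864.000 (0.116%)
# 26+2: 1 in 706.909 (0.141%)
# 25+2: 1 in 555.429 (0.180%)
# 24+2: 1 in 457.412 (0.219%)
```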



Kirjava said:


> Initial algorithm memorisation quantity isn't an issue. I'd rather reduce the number of cases using a generated algorithm (though they aren't that bad).
> 
> This is pretty much beyond what I asked for, and it's going to allow me to complete my system. Thank you so much! I'll mention you in my nobel prize acceptance speech.


Well, you're certainly welcome. While putting the finishing touches on this, I've had an audacious idea, and I'm thinking of taking this little experiment in a new direction, but it may take _*considerable*_ time to program. In fact, I'll probably just get frustrated and give up. But we'll see.


----------



## Lars Petrus (Apr 16, 2017)

Hi, anyone still interested in this thread!

I've been working on this "Combo Algs" stuff for a while. Digging in much harder on some ideas I had 10+ years ago. I've built a web site to explore this, and found some really small complete alg sets.

*Small Alg Sets*

Here I think of one "alg" as both mirrored versions, but not inverses. So I consider both mirrored Sune versions (F U F' U F U2 F' and F' U' F U' F' U2 F) the same, but AntiSune as a separate alg. I think the discussions here treat both mirrored and inverted versions the same, making it hard to compare.

The smallest set I've found that combines to solve all 3916 LL positions is these 39 algs: http://birdflu.lar5.com/alg_sets/1/algs. The resulting solutions are 15.22 moves on average.

The smallest set for all 494 LL positions with edges oriented is 14 algs: http://birdflu.lar5.com/alg_sets/4/algs. 15.18 moves average.

I searched pretty hard, but it's definitely possible smaller sets exist. If I understand the 24+2 solution above, that is 52 algs the way I count them, but that is of course not a fair comparison.

*The Web Site*

http://birdflu.lar5.com has all 43 million LL algs of 17 moves or less, organized by positions and categories, and some related features. It's easier for you to explore than for me to explain. I'm happy to explain or fix anything that's weird.

You can also use some predefined combo alg sets and get combo solutions for any position. For the "smallest" ones described above, use these links:
All LL
EO LL
*Make your own!*

The coolest part is that you can easily *make your own combo sets!* First log in (WCA login) and pick the *Combos* section up left. Hit the *Create* button when ready to make your own. I thought about entering some of the sets discussed here, but there are too many. Let me know which one is "important", and I'll type it in!


----------



## xyzzy (Apr 16, 2017)

Lars Petrus said:


> The coolest part is that you can easily *make your own combo sets!* First log in (WCA login) and pick the *Combos* section up left. Hit the *Create* button when ready to make your own.



I just tried this, but I don't quite understand the user interface. It also reports stuff like "'N96893' is not combinable (yet)" for most things I throw at it.


----------



## Lars Petrus (Apr 16, 2017)

Ah yes. For an alg to be used in Combos, it needs to be prepared in the DB. The combinable ones are marked with *c* in the lists. Right now those are 540 mirror pairs that seemed the most useful to me.

It's not hard to add more, but I have to do it manually. So let me know which ones you need, and I should have them up pretty quickly. I assume/hope/pray the algs people actually use can be added pretty quickly.

A big part of the UI is that all algs have names. N96893 is the Birdflu name for R U2 R' U' R U R' B' U R U R' U' B


----------



## xyzzy (Apr 16, 2017)

Not useful per se, but I think I have a set of 12 ZBLL algs (6 cases and their inverses) that can generate all of ZBLL:

bf4g / L1175
bf4k / J286 (this one and its inverse are already combinable)
bf4a / M16873
bb4l / N257450
Ff4h / K1880
bd4d / N96893


----------



## Lars Petrus (Apr 16, 2017)

Thanks! I added those.

That set does miss one position: http://birdflu.lar5.com/?pos=Cd4a (Setup: B' U2 B U' F' U B' U' F2 U' B U F')

Still, that easily beats my 14 alg set by adding any alg that covers that position. The fact that I didn't have 10 of those 12 in the system shows the deficiencies in my small set search. I've mostly focused on finding fast sets, not small, and it shows.

How did you find this one?


----------



## TDM (Apr 16, 2017)

Lars Petrus said:


> The smallest set for all 494 LL positions with edges oriented is 14 algs: http://birdflu.lar5.com/alg_sets/4/algs.


Wow. What I love about these is that they're all nice cases. Most are standard COLLs/easy ZBLLs.
It's unusual how only J93 and J417 affect EO though. Do you know which cases they help solve?

I made a table with speed-optimal algs, though I doubt many people will be able to use it for speedsolving.


----------



## Lars Petrus (Apr 16, 2017)

If I remove J93/J417, only http://birdflu.lar5.com/?pos=Fo4A becomes unsolved, so that's why it's there. Kinda silly to have EO flipping algs in an EO set, but that's what dumb software will do.

When I made the "tiny" sets, only algs of 10 moves or fewer were combinable, so that's why all the algs are short.

If you want to "upload"/play around with that 14 alg set, I'll add the missing algs.


----------



## xyzzy (Apr 17, 2017)

Lars Petrus said:


> How did you find this one?



I wrote a script a while back to calculate the coverage of any set and ran it for a few hours (it was really slow) on random sets of 6 cases. Strange that it missed one case though; maybe there's a bug in my code, or maybe I copied the results wrongly back when I ran it. I do have two other 6-alg sets that cover all but one case.

Edit: Oh, actually the code was correct, but I looked up the wrong case in Birdflu. Replacing Ff4h / K1880 with Fl4F / K701 fixed it—full coverage with 12 algs!
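xyzzy's script isn't posted, but the core of any such coverage check can be illustrated abstractly: model states as elements of a group, treat the chosen algs as generators, and collect every state reachable by composing at most two of them. Here's a toy version on S4 standing in for the 62208 LL states (illustrative only; a real checker would use actual cube states and also fold in AUF):

```python
from itertools import permutations

def compose(p, q):
    """Permutation product: apply q first, then p (tuples map index -> image)."""
    return tuple(p[i] for i in q)

def two_look_coverage(gens):
    """All states expressible as a product of at most two of the generators."""
    identity = tuple(range(len(gens[0])))
    one_look = {identity} | set(gens)  # zero or one alg
    return {compose(a, b) for a in one_look for b in one_look}

# Toy universe: S4 (24 states) in place of the 62208 LL states.
universe = set(permutations(range(4)))
gens = [(1, 0, 2, 3), (0, 2, 1, 3), (0, 1, 3, 2)]  # three adjacent swaps
covered = two_look_coverage(gens)
print(len(covered), "of", len(universe))  # 9 of 24
```

The slow part in practice is exactly this pairwise composition: with n algs (times their AUF variants), two-look coverage means examining on the order of n² products.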


----------



## Niko Lopez (Jun 15, 2017)

10461394944000 said:


> what is the minumum number of algorithms needed such that for any last layer case, you can apply 2 algorithms and solve the cube?
> 
> it's probably some really silly thing that would be silly to recognize but it seems like an interesting question.
> 
> ...


So I'm thinking (I'm probably wrong) that if you want to be able to do 2LLL every time using the minimum number of algs, you will have to learn Winter Variation (I don't know if that counts as an LL alg, but for now I'm not counting it), then 3 OLLs and 21 PLLs, for a total of 24 algs. If you do count Winter Variation, the total would be a lot higher; in that case you would want to learn intuitive edge control (which is no algs, or you can learn the alg version, VHS), then the 7 cross OLL cases and the 21 PLL cases, which comes out to 28.


----------



## Cale S (Jun 15, 2017)

Niko Lopez said:


> So I'm thinking (I'm probably wrong) that if you want to be able to do 2LLL every time using the minimum number of algs, you will have to learn Winter Variation (I don't know if that counts as an LL alg, but for now I'm not counting it), then 3 OLLs and 21 PLLs, for a total of 24 algs. If you do count Winter Variation, the total would be a lot higher; in that case you would want to learn intuitive edge control (which is no algs, or you can learn the alg version, VHS), then the 7 cross OLL cases and the 21 PLL cases, which comes out to 28.



This question is about full LL, so you can't influence it with WV


----------



## Lucy Griffiths (Jun 23, 2017)

To understand the math here, I think we should start with the basics: the fundamental counting principle. Once you get that, you can look at it as a path-counting problem.


----------



## Abram Lookadoo (Jun 26, 2017)

i have found a way to solve this using 45 algorithms

perform M2 S2
look 1: use adf and auf then orient all the corners and edges (31 algs)
look 2: use adf and auf then permute all the corners and edges (14 algs), then auf the side permutation to match the centers, and adf the corner permutation to match the E-slice pieces
perform M2 S2

for 3 look, i found a way that uses 22 algorithms

perform M2 S2
look 1: use auf and adf then solve edge orientation and corner permutation (11 algs)
look 2: use auf then solve corner orientation (7 algs)
look 3: use adf then solve edge permutation (4 algs), then adf the bottom corners to match the E edges, and auf the top edges to match the centers
perform M2 S2

this is effective enough to turn 1 look into 1289 algorithms


----------



## xyzzy (Jun 26, 2017)

Abram Lookadoo said:


> i have found a way to solve this using 45 algorithms
> 
> perform M2 S2
> look 1: use adf and auf then orient all the corners and edges (31 algs)
> ...



Very interesting ideas (using setup moves so you can reduce by both AUF and ADF is pretty cool), but (i) use the edit button instead of multiple posts and (ii) don't post the same thing in multiple threads.


----------

