# Objectively, the best 3x3x3 method...



## FiWiFaKi (Dec 6, 2016)

Hello speedcubers, my first post and thread on these forums. I've been lurking for a while, and I decided to create this post due to my displeasure with the resources I found online. A little bit about my cubing background: I started maybe 4 months ago, being 22, and I cube all over the place, depending on what my ever-changing work schedule allows. Sometimes I don't get to cube for two days, and then sometimes I have 5 hours to improve each day on the weekend. Currently, I average 30 seconds, doing algorithmic F2L (I used intuition to figure out why the algorithms work, so it's kind of like structured intuitive F2L), PLL, and ~30 OLLs (the rest I do with 2-look OLL). My turning speed and look-ahead in F2L are rather weak, so I'm mostly practicing those while I learn the rest of the OLLs.

So onto the question I'm trying to address and discuss: *What is objectively the best method to solve a 3x3x3 cube, where the goal is to achieve the fastest average time, for an average or somewhat above-average person in terms of talent, over a period of several years (say 2-5 years) of moderate dedication (45 minutes to 2 hours of cubing on an average day over that period)?*
I think many people here, and cubers around the world, naturally take what is in my opinion a primitive approach to improving at the Rubik's cube. Maybe this can be attributed to the relatively young age of the average cuber, but I'm trying to take a more scientific (note: not mathematical) than artistic approach to figuring out what the best method is.

When someone gets into speedcubing, the structure of learning starts off very easy... You go from one of the many variants of the beginner's method to learning F2L + 4LLL (using the 3 OLL edge algorithms, 7 OLL corner algorithms, then 2 different PLL corner algorithms, then 4 different PLL edge algorithms, and voila)... After that, you learn full PLL and full OLL. Up to this point, resources are very clear and centralized; from here on out, however, everything becomes all over the place, and in my opinion there's no clear way forward.

I think the primary reason is that once you learn full CFOP, the available improvement is, say, only a 20% reduction in your time even if you use every resource available. As all of us cubers know, ~90% of cubing improvement comes from practice, which is why someone like Feliks Zemdegs could use an unaltered 4LLL and still average 10-12 seconds. When the differences become this small, it becomes difficult to objectively analyze which method is better... because someone only a little more practiced than me can solve the cube faster with an inferior method. The result is very vague advice on the forums: "get better at look-ahead", "go slow to see how the pieces move with the F2L algorithms", "be able to do all the cases blind", etc.

My opinion is that the majority of people are similar enough biologically that there should be one optimal method that fits all, particularly after full CFOP is learnt. One of my gripes, for example, is when someone says: "Go to some algorithm site and try the other PLL/OLL algorithms to see if there are algorithms that are better for you"... I think we all have the same hands, and that with enough practice there should be one set of algorithms that is best for almost all people, just as one example.

So with that approach, let's assume that the best method to solve a cube involves solving it in layers. Even that is broad, so let's take the empirical evidence that the CFOP-derived methods for solving the first two layers are the fastest, which leaves us with the last layer. Now there is a tradeoff between memorization + inspection on one hand and moves required on the other. In an ideal world, with a cyborg's mind, we would be doing 1LLL, but its 3915 algorithms have been determined to be impractical for the average (or even the best) human mind to implement effectively. Which means we need to break it down into two steps.

This can be done in many ways, but when we think about what is optimal, we know that the rough trend when breaking the last layer into two steps is: (# of cases in step 1) × (# of cases in step 2) ≈ (# of cases if done in one step). Since we're trying to minimize inspection time, the fewer algorithms we need to recognize the better, which means it's desirable to make both steps roughly the same size. OLL/PLL is 57 + 21, so I'm sure some other organization that broke the two steps into, say, ~34 + ~34 cases would require less to remember, but OLL/PLL is really successful due to its ease of inspection: you only have to look at one-color blocks in OLL, and notable features in PLL.
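To make that tradeoff concrete, here's a small sketch of the balanced-split argument, using the case counts quoted in this thread: if the two steps' case counts must multiply to roughly the one-look count N, their sum (the algorithms you actually learn) is smallest when both are near √N.

```python
# Sketch of the balanced-split argument: if (cases in step 1) * (cases in
# step 2) ~= N, the total number of algorithms to learn is minimized when
# both steps are near sqrt(N). N = 3915 is the 1LLL count quoted above.
from math import isqrt

N = 3915
for step1 in (4, 7, 21, isqrt(N)):   # EPLL-, OCLL-, PLL-sized, and balanced
    step2 = N // step1               # rough size the other step must have
    print(f"{step1:>3} x {step2:>4} cases -> learn {step1 + step2} algorithms")
```

The imbalanced splits (4 + ~979, 7 + ~559) mirror the OLLCP + EPLL and OCLL + ZBLL figures, while the balanced split lands near the ~34 + ~34 hypothetical above.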

Nothing is stopping us from doing CLL + ELL (42 + 29, hence fewer algorithms than OLL + PLL, but the inspection suffers). Or we could take a far more imbalanced approach, such as OLLCP + EPLL (331 + 4 algorithms, demonstrating why keeping both steps roughly even is preferable)... Or you could combine OCLL + ZBLL (7 + 493 algorithms, once again demonstrating why it's not good to have one step much bigger than the other). Nothing is stopping us from categorizing by corner OLL, edge OLL, corner PLL, and edge PLL, and splitting those four groups into two batches. We could have a two-step method whose first step does OLL+PLL for two edges and two corners, and whose second step does OLL+PLL for the other two edges and two corners; that might create a more even last layer to solve... However, this all comes at the cost of inspection time, and hence I feel quite comfortable saying that if you don't do any prep moves on the last layer while building the F2L, then OLL+PLL is the best.

Currently, I see two ways people try to improve the last layer beyond the classic 2LLL. One is simply to start learning more cases from the 1LLL subset, in order to essentially get a PLL skip. This is the reasoning I see behind some top cubers learning some PLL-skip cases, learning COLL, and so on. The percentage of skips is rather small, and I suppose it's fairly effective at getting one good time, but not so good at improving your average, which is the question under discussion (you might be going from 57 OLLs to, say, 57 OLL + 42 COLL + 20 PLL-skip cases, but that's still a small percentage of the ~4000 1LLL cases).

The other approach people take is to do some setup moves during F2L, most commonly in the form of partial edge control to avoid dot OLL cases (which are what, 1 move longer on average? lol)... Doing the Winter Variation, which shows up roughly (?) 27/864, or 3.125%, of the time, saves around 5 moves (~8 moves versus the ~9 moves for OLL plus ~4 moves for the F2L insert)... Full VLS can be learnt at a cost of 432 algorithms, for an average saving of ~3 moves (9.74 moves for VLS versus 9 moves for OLL plus the 4 to insert)... It begs the question: is it worth it? Is needing to recognize one of 432 algorithms to save 3 moves 50% of the time worth it? And if you learn the cases selectively, you might slow your other solves down while you work out that you don't know the case, when you could have been inserting the F2L pair normally in that time. I believe the only other popular options to mention are VHLS (32 algorithms) and ZBLS (302 algorithms), followed by ZBLL (493 algorithms)... This is more theoretical now, but I suppose it'd be the progression for someone very dedicated. Anyway, the point is that these are all the popular extensions to CFOP: are they any good, are they worthwhile, which is the best? Do people only do them for the bragging rights at this point? There's no definitive answer that I can find on where to take your last layer once you've mastered everything in CFOP.
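As a back-of-envelope check on whether these subsets pay off, one can multiply the quoted frequencies by the quoted move savings. All figures below are the rough estimates from the paragraph above, not measured data:

```python
# Expected moves saved per solve, using the rough figures quoted above.
wv_prob, wv_saved = 27 / 864, 5   # Winter Variation: ~3.125% of solves, ~5 moves
vls_prob, vls_saved = 0.50, 3     # full VLS: applies ~50% of the time, ~3 moves

print(f"WV:  {wv_prob:.4%} * {wv_saved} moves = {wv_prob * wv_saved:.3f} moves/solve")
print(f"VLS: {vls_prob:.0%} * {vls_saved} moves = {vls_prob * vls_saved:.2f} moves/solve")
```

So WV's expected saving is well under a fifth of a move per solve, while full VLS is on the order of a move and a half, which is the arithmetic behind the "is it worth 432 algorithms?" question.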

So what is the best last layer method? And on that note, what is the best inspection system: is recognizing a PLL by looking at subtle differences on two edges quicker than a quick whiff of 3 faces that lets you see larger features? Is there any method for inspecting OLLs, a better classification and data structure than simply associating the shape of the yellow blocks you see on top (assuming white cross) and then looking for a key pattern elsewhere? Is it worthwhile to learn OLL algorithms from all the different angles, to avoid a cube/top-face rotation at the cost of more processing time in your brain to recall the algorithm (it'd be almost 4x the algorithms; some OLLs have symmetry and can be performed from two different angles)? If yes to some, how much is enough; what is the ideal amount? The same goes for PLL... The question once again comes back to: where do you go after learning full CFOP?

We can discuss many things, many of which haven't really been scientifically tested. People here seem to like color neutrality, and badmephisto says that the average cross from white only is 5.81 moves, while dual cross is 5.39 and full color neutral is 4.81. Is saving 0.5-1 moves worth it? Sure, you can look ahead a bit more and start thinking about your extended cross earlier, but you also need to spend some time finding which cross will be the best, plus the obvious increased recognition load during F2L (and especially other stuff like COLL or ZBLL, if you go down that route).
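For scale, the move difference can be converted into cross time under an assumed turning speed. The 6 TPS figure here is purely illustrative; the move counts are badmephisto's from the paragraph above:

```python
# Convert badmephisto's average cross move counts into rough cross times.
# The TPS value is an illustrative assumption, not a measured figure.
cross_moves = {"white only": 5.81, "dual colour": 5.39, "colour neutral": 4.81}
tps = 6.0  # assumed turns per second during the cross

for name, moves in cross_moves.items():
    print(f"{name:>14}: {moves} moves ~= {moves / tps:.3f} s of turning")
```

At that speed, full colour neutrality saves on the order of 0.17 s of pure turning per solve, before any extra inspection cost is counted against it.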

Some people will say to go learn some advanced F2L cases, but which ones? They're all over the place, and in the videos I've seen, most people try to be more flashy than useful. To what extent should you minimize rotations in your F2L, and at what point does it stop being worth it? Even the basic stuff: is it worth going a tad slower to keep track of all your empty slots and potentially save a move or two on insertions, or is it better to use the tried and true inserts that avoid other slots and just turn faster? Is multi-slotting worth the time? Should you use a generic algorithm to get rid of stuck pieces, or learn one for every orientation? Algorithms for misaligned pairs?

Anyway, as I mentioned at the start, it's difficult to quantify what is best going forward, because these are such minor improvements to CFOP, and practice will make a far larger difference in most cases. But once you approach your personal limit after years of practice, I believe these differences make a noticeable impact - say the difference between a 6.5-7 second average and an 8 second average. Naturally, most people won't get to that point, and so most people won't notice they have an imperfect system, because hey, they're solving the cube in 12 seconds, which is faster than their peers, or just good enough for them. Speaking for myself, though, it would be satisfying to know that what I'm learning is the best system there is for someone like me, even if I might never reach its full potential... And hence, even though I know practicing instead of typing this out would probably yield more benefit in terms of speed, for my peace of mind, and to promote discussion, I'd like to discuss the answer to the above question here.

I'm aware that this isn't the place to ask questions, but I don't feel that this thread is so much about personal gain; rather, I hope it becomes a somewhat scientific discussion about what is best given the limitations of the human body and mind, and hence I believe it to be acceptable for this forum. If not, feel free to move it or whatever.

TLDR: All resources expanding on full CFOP are all over the place, with no unanimous consensus on the way forward.


----------



## Loiloiloi (Dec 6, 2016)

Best introduction thread ever


----------



## obelisk477 (Dec 6, 2016)

ZBLS + ZBLL.

solved your problem


----------



## mark49152 (Dec 6, 2016)

Welcome to the forums.

There's no "objectively best" method, because to arrive at that conclusion you would need to define a single, quantifiable measure of how good a method is, so that you can score and compare methods. In practice, there are many factors that affect how fast a method can be. Not all of those can be easily quantified, they can be traded off against each other, and different people benefit differently from each. We don't all have the same hands - or eyes, or brains. 

For example, the most common forum argument is whether Roux or CFOP is the better method. Maybe the most obvious objective measure of a method is move count, and Roux generally uses significantly fewer moves than CFOP, so it wins on that score. However, more of the steps are intuitive rather than algorithmic, and that generally results in lower overall TPS. It also involves lots of M slices. Some people perform better at those things, others don't. How can you account for those factors in an objective score? You can't. They are subjective.

The same applies to the choices to progress beyond standard CFOP LL. There are many options requiring different skills and delivering different benefits that may depend on where the strengths and weaknesses of the solver lie. Although objective comments can be made about them, there's no unified objective "score".


----------



## TDM (Dec 6, 2016)

FiWiFaKi said:


> So with the approach, let's assume that the best method to solve a cube involves solving the cube in layers, for me this is so broad, so let's take the empirical evidence that the CFOP F2L derived methods for solving the first two layers are the fastest.


Yeah, that's a very big assumption to make. I don't agree. Or at least, the layers don't have to be D->E->U. It doesn't look like you've considered L->R->M or L->M->R at all.



mark49152 said:


> However, more of the steps are intuitive rather than algorithmic, and that generally results in lower overall TPS.


I don't agree with this either. SB quickly becomes as "algorithmic" as F2L (i.e. it requires lookahead but is still just spamming algs), and I'd actually say it's easier to get high TPS on SB because it's rotationless <R, Rw, U>. I agree with you for FB - but the same could be said for cross.


----------



## mark49152 (Dec 6, 2016)

TDM said:


> I don't agree with this either


Which bits, that more of the steps in Roux are intuitive, or that intuitive steps generally have lower TPS than algorithmic?

Either way, I stand by both. I said "generally" because it was a general statement, that although there might be exceptions or other factors, intuitive steps will generally have lower TPS. The reason being that you're having to watch the pieces and plan your moves while solving. For expert solvers that might slow them down very little, if at all, but certainly it's never going to make them faster, compared to a fixed sequence of moves executed from muscle memory alone. Assuming other things are equal, like ergonomics.

Of course, we're still waiting for a few more Roux solvers to step up and prove they can really exploit the move count and ergonomic advantages of Roux to get among the fastest CFOP solvers.


----------



## TDM (Dec 6, 2016)

mark49152 said:


> Which bits, that more of the steps in Roux are intuitive, or that intuitive steps generally have lower TPS than algorithmic?
> 
> Either way, I stand by both. I said "generally" because it was a general statement, that although there might be exceptions or other factors, intuitive steps will generally have lower TPS. The reason being that you're having to watch the pieces and plan your moves while solving. For expert solvers that might slow them down very little, if at all, but certainly it's never going to make them faster, compared to a fixed sequence of moves executed from muscle memory alone. Assuming other things are equal, like ergonomics.
> 
> Of course, we're still waiting for a few more Roux solvers to step up and prove they can really exploit the move count and ergonomic advantages of Roux to get among the fastest CFOP solvers .


The part about lower TPS. It does become muscle memory, just like how F2L is muscle memory and turning at full speed. I don't think of any of my F2L algs as being intuitive any more.


----------



## mark49152 (Dec 6, 2016)

TDM said:


> The part about lower TPS. It does become muscle memory, just like how F2L is muscle memory and turning at full speed. I don't think of any of my F2L algs as being intuitive any more.


Yeah, of course it becomes largely muscle memory, but it is not a fixed sequence, and you still have to look and think while you do it.

Here's an experiment. Do a Roux solve and time one of the "intuitive" steps. Not just a trigger, but a whole block or LSE. Now reconstruct the moves you made, go practise it 1000 times, then time it again and report back whether you got faster or slower.

My assertion is that most cubers will get faster because they eliminate any delay due to recognition or thinking, and transitions between triggers get optimized and committed to muscle memory too. World class solvers may be so good at it the first time that the margin of improvement is much smaller, but my assertion is that nobody will get slower with practice. Since in the ideal case TPS would be only equal and in any non-ideal case it would be lower, it's a reasonable statement that TPS is generally lower for intuitive steps.

Just to be clear, I am talking about overall TPS, not peak TPS.


----------



## AlphaSheep (Dec 6, 2016)

FiWiFaKi said:


> My opinion is that the majority of people are similar enough biologically, that there should be one optimal method that fits all. Particularly after full CFOP is learnt. One of my gripes for example is when someone says: "Go to some algorithm site and try the other PLL/OLL algorithms to see if there's algorithms that are better for you"... I think we all have the same hands, and that when practiced enough, there should be one set of algorithms that is best for almost all people, just as one example.



Firstly, I think this is completely and utterly wrong. People have differing levels of dexterity. People's hands can differ significantly in structure - some people have wide hands with short stubby fingers, some have long thin fingers. Some have fingers at different angles. Some people have very small hands. All of these affect which moves are easy to perform. These are things that do not change much with practice. All that comes with practice is ways to overcome these differences, and there's definitely a point where some people will be faster with, for example an algorithm with a cube rotation, whereas others may be faster with a rotationless alg that requires an awkward fingertrick. 

Secondly, for wanting to be objective, this post is filled with a lot of subjective opinions - and rightly so, because I don't think that there is an objective answer that suits everyone.


----------



## mDiPalma (Dec 6, 2016)

Also, biology only impacts a fraction of cube solving. Psychology plays a surprisingly significant role. Some people are better at 3-D abstract thinking and can identify 10-move 3x2x2 blocks at a glance. Other people have trouble tracking even the simplest of first F2L pairs. Some people can see patterns in algs with ease - others have to drill it over and over into muscle memory. Some people are less narrow-sighted and can see sequences during a speedsolve that other people would completely ignore. I think these hold true even as proficiency goes to infinity.

I believe there is an optimal human method, set to certain constraints, but I think that it varies from person to person.

For most people, I personally think that optimal method is more similar to Snyder in structure than to CFOP, Roux, or ZZ. But maybe not for everyone.


----------



## Sion (Dec 6, 2016)

Cubing methods are really just a slew of tradeoffs of one preference against another. Heck, it's why the Roux v. CFOP debates even occurred.

CFOP is a recall-memory method: you recognize the case, retrieve the algorithm, and apply it. From my experience it does lower your efficiency, although you don't need to slow down, since all the patterns are in your muscle memory.

Roux is the opposite story. Everyone can agree that Roux is better in move count and, theoretically, efficiency. However, its problem is the lack of structure and recall memory, which is why Roux solvers tend to turn more slowly than CFOP solvers.

In conclusion, there is no one best method, because you are always giving something up in exchange for another method's traits.


----------



## Chree (Dec 6, 2016)

tl;dr - Zeroing.

Duh.


----------



## genericcuber666 (Dec 6, 2016)

This thread is going to be fun to watch. I'll just wait a few hours for the other people in the world to find it lol.
Just do what you like. Your argument is (as dumb) as saying we are all biologically similar enough to assume that Bob liking blue cheese means we all like it.

Also, like Sion said, it's tradeoffs: do you like algs, go CFOP; do you like fluid, fast-TPS F2L, go ZZ; do you like efficiency and few algs, go Roux. It's all up to you.

Also, what you said about there not being any clear way to improve with CFOP is what initially made me leave the method.
IMO ZZ and Roux are easier to progress past sub-20 with than CFOP; you get more concrete help than
"do solves"
"look ahead"
"learn that other alg set"
"do this thing blind"

It just doesn't help, I guess. It's an inevitable downside to being the most popular method: there will always be ambiguity.


----------



## Loiloiloi (Dec 6, 2016)

genericcuber666 said:


> imo zz and roux are easier to progress past sub 20 than cfop, you have more help than
> "do solves"
> "look ahead"
> "learn that other alg set"
> "do this thing blind"



I think this doesn't get brought up enough. Sure, there are a lot of resources for CFOP, but it's really just algorithms, since cross and F2L are intuitive steps for which there aren't that many good resources.


----------



## Chree (Dec 6, 2016)

Loiloiloi said:


> I think this doesn't get brought up enough sometimes. Sure there's a lot of resources for CFOP, but it's really just algorithms, since cross and F2L are intuitive steps which there's not that many good resources for.



There are plenty of great resources for getting beyond Intuitive F2L. Just to name a few:
- Algdb.net has just way too many helpful algorithms, it's really a problem.
- Chris Olson's Alg of the Week is frequently showcasing F2L cases, and contrasts them with conventional solutions.
- Also Collin Burns, Westonian, CBC, and Feliks all have F2L tips videos that are incredibly helpful. And they're not just alg heavy, but technique driven.
- Last Slot subsets, which are arguably part of F2L: WVLS, VHLS, ZBLS, VLS/HLS/OLS. All in the speedsolving wiki or linked somewhere in a searchable thread.

For Cross, on the other hand, I agree with you. It's assumed to be such a simple step that I think not enough has been said about how complex it can be.


----------



## Loiloiloi (Dec 7, 2016)

Chree said:


> There are plenty of great resources for getting beyond Intuitive F2L. Just to name a few:
> - Algdb.net has just way too many helpful algorithms, it's really a problem.
> - Chris Olson's Alg of the Week is frequently showcasing F2L cases, and contrasts them with conventional solutions.
> - Also Collin Burns, Westonian, CBC, and Feliks all have F2L tips videos that are incredibly helpful. And they're not just alg heavy, but technique driven.
> ...


In response to the first two: those are both algorithm resources, which I already stated there are plenty of. As for the second two, those are difficult for a beginner or maybe even an intermediate cuber to use in practice. But you do make a good point; F2L subsets didn't come to mind when I made my original post.


----------



## CornerCutter (Dec 7, 2016)

I think the best method depends on the person.

But of course I think CFOP is better.


----------



## FiWiFaKi (Dec 7, 2016)

Wow, I didn't expect so many replies so quickly, I'll try to reply to everyone one by one if I have anything meaningful to say:

@mark49152 I agree with you that not all factors are easily quantified, but it seems like not much of an attempt has even been made - you know, to compare at which point a tradeoff stops being worth it... For example, 1LLL is an obvious case where recognition and recall times are too long and it's not worth it, and that is pretty clear. When we compare two methods, say OLL + PLL versus CLL + ELL, it becomes tougher, so we could create some empirical formula relating number of algorithms to inspection time... an empirical formula of, say, speed = b*e^(c*time)... I know this starts out really rudimentary, but the point is to build a framework for analyzing this in the future.

It is my opinion that we need better data structures to be able to digest all the data in order to understand the methods more clearly. Right now when I go on the wiki into the 3x3x3 last layer substeps, there's 30 different pages, all very disconnected, each requiring a paragraph to read before you know what exactly it does.

*So yesterday I was bored, and I created what I'll call SALL Notation (Simplified Ambrus Last Layer Notation):*

Each LL algorithm is displayed using (a,b,c,d);(e,f,g,h)
a = EO, e = preserved EO
b = CO, f = preserved CO
c = CP, g = preserved CP
d = EP, h = preserved EP

a -> d chosen in order of classic 4LLL with 2look OLL and 2look PLL to help people remember.

So for example, the four steps in 4LLL would be displayed as such:
EOLL: (1,0,0,0);(1,0,0,0)
OCLL: (0,1,0,0);(1,1,0,0)
CPLL: (0,0,1,0);(1,1,1,0)
EPLL: (0,0,0,1);(1,1,1,1)

To "calculate" which case you obtain at the end of an algorithm sequence, starting from a fully completed F2L at (0,0,0,0), proceed as follows:

- Start at (0,0,0,0).
- Apply EOLL: replace zeroes with ones at the coordinates it modifies (a,b,c,d), giving (1,0,0,0); then, wherever the preserve set is zero, roll the cube's value back to zero (here the last three values are already zero).
- Apply OCLL to (1,0,0,0), giving (1,1,0,0); set the last two values to zero, per the OCLL preserve set (they already are).
- Apply CPLL to (1,1,0,0), giving (1,1,1,0), and perform its preserve-set operation.
- Apply EPLL, giving (1,1,1,1); its preserve set is (1,1,1,1), so no values are rolled back.
- (1,1,1,1) is the solved cube state, tada!

CFOP LL is the same idea, where OLL is: (1,1,0,0);(1,1,0,0) and PLL is (0,0,1,1);(1,1,1,1).
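For anyone who wants to play with the notation, here is a minimal sketch of the state-transition rule just described. The function name `apply_step` is mine, not established terminology:

```python
# SALL notation sketch: a last-layer state is a 4-tuple of 0/1 flags for
# (EO, CO, CP, EP); a step is a (mods, preserves) pair of such tuples.
def apply_step(state, mods, preserves):
    """Set each modified flag to 1, then roll back to 0 any flag whose
    preserve entry is 0."""
    modified = tuple(s | m for s, m in zip(state, mods))
    return tuple(v & p for v, p in zip(modified, preserves))

# Classic 4LLL, written as (a,b,c,d);(e,f,g,h) pairs from the post:
FOUR_LOOK = [
    ((1,0,0,0), (1,0,0,0)),  # EOLL
    ((0,1,0,0), (1,1,0,0)),  # OCLL
    ((0,0,1,0), (1,1,1,0)),  # CPLL
    ((0,0,0,1), (1,1,1,1)),  # EPLL
]
# CFOP's 2-look LL in the same notation:
TWO_LOOK = [
    ((1,1,0,0), (1,1,0,0)),  # OLL
    ((0,0,1,1), (1,1,1,1)),  # PLL
]

for plan in (FOUR_LOOK, TWO_LOOK):
    state = (0, 0, 0, 0)  # fully completed F2L
    for mods, preserves in plan:
        state = apply_step(state, mods, preserves)
    print(state)  # (1, 1, 1, 1): solved last layer
```

Both plans end at (1,1,1,1), matching the hand calculation above.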

So I went through and calculated exactly how many algorithm sets and solve paths exist here; just about every method on the LL wiki can be classified as one of these cases. We will classify algorithm sets based on how many mods they perform, i.e. how many of EO, CO, CP, EP they modify.

*Mod set 1:* Performs only one of EO, CO, CP, EP... EOLL, OCLL, CPLL, EPLL are some examples. In this mod set, there are 4 algorithm groups, each performing one of EO, CO, CP, or EP... But each group has 8 algorithm sets, to account for all the different preserve combinations, giving Mod set 1 a total of 32 algorithm sets.
*Mod set 2:* Performs two of EO, CO, CP, EP... So this is your OLL, CLL, ELL, COLL, LLEG, CPEOLL, PLL, L4C, OCELL, 2GLL... etc. There are 6 algorithm groups in mod set 2, each with four algorithm sets, which gives Mod set 2 a total of 24 algorithms. Plenty of these are in use: when doing the last layer with no special F2L steps, a Mod set 2 + Mod set 2 2LLL is less algorithm heavy, so varying combinations of these are explored.
*Mod set 3:* Performs three of EO, CO, CP, EP... Now these are the crazy algorithm sets. There are 4 algorithm groups (each missing one of the 4 modifications)... And then each either preserves the one modification it leaves out, or it doesn't... which gives us 8 algorithm sets for this case. I find the ones that don't preserve it rather unhelpful, since you'll still need a 2LLL, but the ones that do allow for some special LS or L2S insertion to orient stuff. The two well-known cases in this set of 8 are ZBLL and OLLCP. Personally I think that (1,0,1,1);(1,1,1,1) - everything but corner orientation, while preserving everything - should have a lot of potential (if anyone knows whether it's been looked at, let me know), when combined with a ZBLS-type method, but one which creates the checkerboard on the top face, not the cross.
*Mod set 4:* Performs all of EO, CO, CP, EP... so like, uhh... that's 1LLL. Only one case, nothing more to it.

So there we have it: 32 + 24 + 8 + 1, giving a grand total of 65 algorithms for this specific subset of ways to solve the last layer. Of course, we could expand this: instead of always orienting or permuting a group of four pieces, we could treat each piece individually. Instead of creating combinations from these four steps, we would be creating combinations from sixteen modifications, drastically increasing the count of algorithm sets. This would allow for solving the LL by, for example, orienting and permuting a 2x2x1 block of the LL in the first step (6 of the 16 modifications) and solving the rest in the second step (10 of the 16 modifications). Most cases would have awful inspection, but some could be seen as logical, and might require easier moves to solve.
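The mod-set counts above follow from a short binomial computation, under the post's convention that a step picks which k of the four properties to modify and then chooses independently whether to preserve each of the remaining 4 - k:

```python
# Count the algorithm sets per mod set: choose k of the 4 properties
# (EO, CO, CP, EP) to modify, times 2^(4-k) preserve choices for the rest.
from math import comb

counts = {k: comb(4, k) * 2 ** (4 - k) for k in range(1, 5)}
print(counts)                 # {1: 32, 2: 24, 3: 8, 4: 1}
print(sum(counts.values()))   # 65
```

This reproduces the 32 + 24 + 8 + 1 = 65 breakdown, and the same formula extends directly to the sixteen-modification version mentioned above (with 16 in place of 4).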

Then of course, we could expand this even further by not assuming that solving the cube requires an orientation step and a permutation step, but instead reducing to some other state and solving from there (the same way some solvers who use EO methods might solve to a superflip if they have many bad edges, and perform the algorithm to flip it back at the end with very high TPS). We could keep going, expanding this into the 2nd layer, LSE, etc.

So I suppose I went a bit off topic, but really what I'm trying to get across is that there are ways to put information into data structures much better than is done in the community.

You say things like: "For example, the most common forum argument is whether Roux or CFOP is the better method. Maybe the most obvious objective measure of a method is move count, and Roux generally uses significantly fewer moves than CFOP, so it wins on that score."

But how is move count the most obvious objective measure, when my question was: *What is objectively the best method to solve a 3x3x3 cube, where the goal is to achieve the fastest average time, for an average or somewhat above-average person in terms of talent, over a period of several years (say 2-5 years) of moderate dedication (45 minutes to 2 hours of cubing on an average day over that period)?*
Lowest HTM doesn't matter at all for my question, so I'm not sure why you brought it up. I want a way to determine what the average maximally trained mind is capable of, and based on those parameters, to start developing relationships that quantify the tradeoffs between the various options. My viewpoint is that nobody is inherently bad at math if they're in a mentality where they're trying to learn to be more math oriented. You might keep thinking about math the same way an artist thinks about it, and hence make no progress, but that's because you aren't learning effectively. I have an Engineering and Economics degree, and I truly believe that 99% of the population could achieve one if they set their mind to it... I don't think we're that much different in the brain when it comes to solving a Rubik's cube.

There is a fairly unified score, and that is the time the average person takes to solve the Rubik's cube on average, assuming they've followed the ~1 hour a day for 2-5 years criterion and have been learning effectively (hence the average-or-above-average-intelligence part)... Then you correct for biological factors like age, and potentially gender or race (if a correlation is found). It's just an extremely difficult experiment to perform in a way that gets honest answers (it's sometimes like asking men for their size)... Very time consuming, and a lot to control for, but in theory it can certainly be performed (if I had a billion dollars to pay a large enough group of people to do this).

So I guess I'm not happy with the pessimistic approach demonstrated here by many: "Oh, it can't be done, it's different for everyone." I never said it's a small undertaking, but I think it can be taken much farther than most would give it credit for.


----------



## 4Chan (Dec 7, 2016)

Haha, you remind me of Liquid KiwiKaki
You're from Canada too!


----------



## xyzzy (Dec 7, 2016)

FiWiFaKi said:


> Personally I think that (1,0,1,1);(1,1,1,1) - Everything but corner orientation but preserves everything should have a lot of potential (if anyone knows if it's been looked at, let me know), when combined with a ZBLS type method, but one which creates the checkerboard on the top face, not the cross.



This is basically Winter variation (without EO) followed by COALL, which has been suggested many times and shown to be bad about as many times. There are a few nice algs (like R2 B' R' B R' F' U' F R U R'), but it's still a pretty large alg set (150-ish cases iirc) and many cases don't have good algs. (Or don't have good algs _yet_. You can be the first to demonstrate its potential!)



FiWiFaKi said:


> So there we have it, we have 32 + 24 + 8 + 1 algorithms, giving a grand total of 65 algorithms for this specific subset of all algorithms to solve last layer.



I think you mean algorithm _sets_.



FiWiFaKi said:


> First step you orient and permute a 2x2x1 block in LL (6 of the 16 modifications), and solve the rest in the second step (10 of the 16 modifications). Most cases would have awful inspection, but some could be seen as logical, and might require easier moves to solve.



I experimented with this (with edges already oriented), because this was one of the most even splits of ZBLL into two looks, and it also has the highest skip probabilities among 2-look ZBLL systems because you're free to choose any one of the four 2x2 blocks for the first step. I couldn't come up with a good way to recognise the cases, but I didn't try very hard. For a similar idea, see also Speed Heise, which also requires EO to already be done.

Without edge orientation already done, this gets a lot nastier in terms of the number of algs needed, but it might still be viable as a speedsolving method with relatively high skip rates.



FiWiFaKi said:


> So I suppose I went a bit off topic, but really what I'm trying to get across is that there are ways to put information into data structures much better than is done in the community.



Agreed, but I also don't think that your (1,1,0,0);(1,1,0,0) notation is better. Did CP come first, or was it EP first? I think something like EO+CO/EP+CP for OLL/PLL, EO+CP/EP+CO for CPEOLL/2GLL, etc. might be clearer, where it's understood that later steps preserve what's done in earlier steps. There are weird LL systems that can't be nicely characterised like that, but, well, stuff happens.



FiWiFaKi said:


> So I guess I'm not happy with the pessimist approach here demonstrated by many: Oh, it can't be done, it's different for everyone. I never said it's a small undertaking, but I think it can be taken much farther than most would give it credit for.



Rather than being pessimistic, I'd call it pragmatic. It can be done, but as you've pointed out, it'd also take a lot of resources. Speedcubing isn't so huge of a sport that a large-scale experiment like what you suggest makes sense for anybody to sponsor. As for applying the scientific method to optimise solving, there are still major hurdles. It's different for everyone (yes, it really is! I don't see how this is being pessimistic, by the way), and for a single person to optimise their own solving, this would require going through thousands of solves with every variation of every method.

Performing this kind of intensive experiment is further complicated by the possibility that you improve during the experiment (you want to measure the difference between methods, not to measure your improvement across every method), so you have to adjust for that by switching between methods every so often, but then it might throw you off to switch between methods constantly, and then you have to adjust for _that_ too.

This is by no means impossible. It's just very boring.


----------



## Aaron Lau (Dec 7, 2016)

Hey, shouldn't the TL;DR be at the start of the post instead of the end? I mean, people will only come across it once they've finished reading the entire post...


----------



## FiWiFaKi (Dec 7, 2016)

@4Chan That's where the ID came from ^^

@xyzzy thank you for the long and well thought out reply, much appreciated.

Thanks for letting me know that it's not seen as good, I suppose I'd have to look at the algorithms closer to be 100% sure, but I'll take your word for it.

- VHLS + ZBLL is 32 algs (6.63 moves) + 493 algs (12.08 moves)... Replacing VHLS with ZBLS saves 3 moves, but now we're up to 795 algs, with an LS insert + LL move count of 15.63 (ZBLS+ZBLL is more, but ZBLS solves the F2L case rather than just inserting it; I tried to adjust it to only take the solving into account). Versus CFOP: LS (3.7 moves) + OLL (9.7 moves) + PLL (11.8 moves)... So CFOP LS+LL is ~25.2 moves.
- WV/SV + COALL (first time hearing of this) is 108 algs, though half are perfect mirrors, and taking your 150 for COALL, that's significantly fewer algs. Move count is (8.07+9.4)/2, so 8.74 moves for WV/SV.

So now we have, in a nice order of algorithm tradeoff:
CFOP: 78 algs at 25.2 moves
WV/SV + COALL: 258 algs at 8.74+(12)? = 20.74 moves
VHLS+ZBLL: 525 algs at 18.71 moves
ZBLS+ZBLL: 795 algs at 15.63 moves
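As a sanity check, the totals in that list can be recomputed from the per-step figures quoted in this thread. A quick Python sketch (every number here is a rough estimate from the posts above, not measured data, and the 12-move COALL figure is the guess from the list):

```python
# Alg-count / move-count tradeoffs, recomputed from the per-step
# estimates quoted above (rough community figures, not measurements).
systems = {
    "CFOP":          (57 + 21,   3.7 + 9.7 + 11.8),       # OLL + PLL algs; LS+OLL+PLL moves
    "WV/SV + COALL": (108 + 150, (8.07 + 9.4) / 2 + 12),  # COALL move count assumed ~12
    "VHLS + ZBLL":   (32 + 493,  6.63 + 12.08),
    "ZBLS + ZBLL":   (302 + 493, 15.63),
}
# Sort by alg count: fewer moves consistently costs more algs to memorise.
for name, (algs, moves) in sorted(systems.items(), key=lambda kv: kv[1][0]):
    print(f"{name}: {algs} algs at {moves:.2f} moves")
```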

Hmm, maybe I will need to look into it more, because we can see it's all tradeoffs, but I kind of like it in theory anyway. The issue with VHLS + ZBLL is that it splits the solve into very uneven steps, where VHLS is way too easy compared to ZBLL, and that raises the algorithm count too much. Instead, this WV/SV+COALL would make the first step more difficult and the second step much faster... And edge orientation is quicker for me to notice as well... It really depends on how the algs look.

The ZB method is designed around using both ZBLS+ZBLL, since then both steps are fairly "balanced", while if we did the VHLS -> ZBLS expansion for WV/SV -> solving any F2L case, you'd have, I think, an algorithm count in the 1,000-2,000 range, so that's no longer practical. But the entire basis for this is that ZB is balanced while requiring too high an algorithm count, whereas WV/SV+COALL is balanced at a third of the alg count, but 5-6 moves longer. Hmm, I'll have to look at some cubing resources online to analyze this stuff; it's got me interested.

Interesting to hear that you've tried building around a 2x2x1 block; maybe I got the idea when I was quickly skimming through every wiki article, not sure.

Actually, with those 65 algorithm sets, when you start with a fully completed F2L, there are exactly 63 non-redundant solution paths. By non-redundant I mean: don't preserve EO/CO/CP/EP if you're doing an OCLL-variant first step in a 4LLL.

The solution paths are:
_4 modifications_ - 1LLL *1 case*
_3+1 modifications_ - 2LLL, OLLCP + quick finish type things *4 cases*
_1+3 modifications_ - 2LLL, ZBLL type of thing *4 cases*
_2+2 modifications_ - 2LLL; the options are: OLL/PLL, CLL/ELL, LLEF+L4C, CPEOLL+2GLL, PLL/OLL (sounds awful lol), and the inverse of the 2GLL method (doesn't exist yet?)... I feel safe saying that out of those 6, OLL/PLL is the fastest. *6 cases*
_(2+1+1) or (1+2+1) or (1+1+2) modifications_ - 3LLL. I guess the only similar thing is BLL (was that it?), where the guy tried to get a 3LLL in under 25 algorithms. Well, there are many different options to explore here for low-algorithm solving, not really practical though. *3x12 cases*
_1+1+1+1 modifications_ - 4LLL, just a different order for each of the steps. I feel like the classic LL method is best; I think doing permutation before orientation on any piece is very bad, so that automatically would cut the cases in half to six. *12 cases*
So these are all 63 solution methods for last layer solving with a completed F2L... assuming you're always solving all EO, or all CO, or all CP, or all EP in a single step, which almost every method does.
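For what it's worth, the step-count breakdown can be checked by brute force: counting the ordered ways to split the four modifications (EO, CO, EP, CP) into successive non-empty steps gives 1, 14 (= 4+4+6), 36 (= 3x12), and 24 cases for one through four steps; 63 then follows from halving the 24 four-step orderings as redundant. A small Python sketch of that count:

```python
from itertools import combinations
from collections import Counter

PIECES = frozenset({"EO", "CO", "EP", "CP"})

def ordered_partitions(remaining):
    """Yield every ordered split of `remaining` into non-empty steps."""
    if not remaining:
        yield []
        return
    items = sorted(remaining)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            first = frozenset(combo)
            for rest in ordered_partitions(remaining - first):
                yield [first] + rest

by_steps = Counter(len(path) for path in ordered_partitions(PIECES))
print(dict(sorted(by_steps.items())))
# 1 step: 1; 2 steps: 14 (= 4+4+6); 3 steps: 36 (= 3x12); 4 steps: 24
# (halving the 24 four-step orderings gives 1 + 14 + 36 + 12 = 63)
```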

Anyway, I keep going a bit off topic; the goal isn't to theorycraft new variations here. My goal was simply to show a bit of structure when analyzing these things... because if we're to improve on methods, and the structure of all the information out there is really poor, then it's difficult to build on it. It's the imperial vs metric argument all over again, just applied to speedcubing.

As for performing this kind of experiment, I agree that it's not really feasible for a single person. I am under the impression that after 2 years of extensive practice with a method, there won't be much improvement left to be made, as we can see with many of the top cubers... However, it's hard to keep up that motivation for an answer that isn't meaningful in the grand scheme of things, on top of factors like getting slower with age. Anyway, what you discussed is essentially the brute-force method; I feel like there must be some structure to performing experiments where we can gain new insights, and uncover fragments of the answer one piece at a time.

@Aaron Lau https://community.spiceworks.com/topic/362862-tl-dr-top-or-bottom

I think the internet is against you


----------



## mark49152 (Dec 7, 2016)

FiWiFaKi said:


> How is move score the most objective measure? When my question was: *What is objectively the best method to solve a 3x3x3 cube, where the goal is to achieve a fastest average time ...*


No, I said it's the most obvious objective measure of a method. Or at least, I can't think of one that is more obvious. 



FiWiFaKi said:


> Lowest HTM doesn't matter at all for my question, so not sure why you really brought it up.


Move count is of course relevant to how fast a method can be. That's so obvious that I wonder whether I've misunderstood your point.

Much of your analysis seems to be focused on algorithm count. Sure, algorithm count is an objective measure, but it has no bearing on your question unless you factor in lost time caused by the difficulty of recalling large alg sets, which is subjective and not directly reflected in the alg count measure anyway.

I'm not disagreeing with taking a scientific approach to method analysis, just pointing out that there is a lot of subjectivity involved and the presence of the word "objectively" in your question really makes it impossible to answer. Even your proposed mass training/solving experiment would not tell us what is objectively the best method. That would be a statistical conclusion, not an objective one.


----------



## AlphaSheep (Dec 7, 2016)

Given sufficient time to learn all of the algorithms in a set, and plenty of time to practice them until you're completely familiar with them, I really don't think a large algorithm set (hundreds of algs) is that big of a deal.

I don't have as much experience as many others, but I'm about 100 algs into learning ZBLL and I don't find recognition for many cases to take that long. I'll try to explain what it's like using a non-cubing analogy. You know how to recognise many types of animals, so you can recognise cats, dogs, horses, birds, etc. You recognise these animals by identifying the features that set them apart. Say you then learn how to identify many different breeds of dog. Obviously this has no impact on your ability to recognise cats, horses, birds, etc. Once again you recognise the breeds of dogs by identifying features that set them apart. When you see an animal, you take in all of the features at once and your brain recognises what it is based on connections it has built between the animal and its features. You can see an animal of a certain size with short white fur, many black spots and a tail and know it is a dalmatian without first having to recognise it as a dog.

Just because recognition for large sets is typically introduced as a hierarchical system of recognising, for example, CO then CP then EP in the case of ZBLL, doesn't mean that's how you would actually recognise it in a solve. At first you do spend more time recognising a case, but with practice you start to notice the features in a different way and recall becomes very fast. I find it no more difficult to recall algorithms now than I did when I only knew 5 algorithms.

I think it's also worth mentioning that the ZBLLs I have chosen to learn are those that I started noticing without putting in any extra effort whatsoever. They all have features that I was noticing during the course of my normal COLL recognition. That means that I did not put in any effort to learn to recognise the ZBLLs that I know. I just had to spend 2 minutes learning the algorithm, and a week or two building up the connections between the case and the algorithm in my brain. In other words, larger algorithm sets do not necessarily equate to longer recognition times or more brain processing power.

Don't forget, your brain is optimised to an incredible degree for building connections and then recalling them. You've probably got a vocabulary of tens of thousands of words, with each one connected to a meaning, appropriate usage, etc, and your brain has no problem quickly pulling up the information it needs. Compared to that, a few hundred algorithms is small fry.

The big challenges with large algorithm sets are:

1. Each alg requires practice to instil in muscle memory so that you can perform it fast.
2. You have to take the time to build the connection between the case and the alg.
3. You have to practice regularly to maintain both of these.
All of these problems are easily solved if you have plenty of time to put in, and only really present a challenge if your practice time is limited.


----------



## mDiPalma (Dec 7, 2016)

FiWiFaKi said:


> Versus CFOP: LS (3.7 moves) + OLL (9.7 moves) + PLL (11.8 moves)... So CFOP LS+LL is 25.2~ moves.
> ...
> CFOP: 78 algs at 25.2 moves



not that it matters, but:
cfop last pair is ~6.7 moves - not 3.7. you are also forgetting to add .75 moves between each step for AUF.

>> cfop LS & LL is actually >31 moves (assuming you use optimal oll and pll, which nobody does).
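Spelling out that arithmetic (per-step averages from this thread; the exact number of AUFs counted is an assumption, since the post only says ".75 moves between each step"):

```python
# Rough CFOP LS+LL move count with AUFs included (figures from the thread).
LS, OLL, PLL = 6.7, 9.7, 11.8   # average moves per step
AUF = 0.75                      # average cost of a U-layer adjustment

# AUF before OLL, before PLL, and at the end:
three_aufs = LS + OLL + PLL + 3 * AUF
# Counting a fourth adjustment (e.g. before the last slot) is what clears 31:
four_aufs = LS + OLL + PLL + 4 * AUF
print(three_aufs, four_aufs)
```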

you're also discounting methods like Snyder's LL (fish & chips) and commutator-based LS/LL approaches derived from Heise which blow methods of comparable alg/case count out of the water with regards to efficiency.


----------



## Y2k1 (Dec 7, 2016)

mDiPalma said:


> not that it matters, but:
> cfop last pair is ~6.7 moves - not 3.7. you are also forgetting to add .75 moves between each step for AUF.
> 
> >> cfop LS & LL is actually >31 moves (assuming you use optimal oll and pll, which nobody does).
> ...


Random question: what is Snyder's method?


----------



## shadowslice e (Dec 7, 2016)

Y2k1 said:


> Random question: what is Snyder's method?


Snyder2 is essentially FreeFOP with a different LL method. Anthony Snyder proposed it a while back and claimed it to be far superior to any method ever invented (he also claims to have been the fastest in the world until 2004 or something like that, but could never go to a comp because everyone was "organising things behind his back" and never invited him because they were jealous and thought he was too good). You can read about it on his website, snydermind, if it hasn't been taken down, but I warn you he does exaggerate a lot. To be fair, it is not a bad method and is definitely sub-10able if you put enough practise in.

There is also the hallowed "snyder3" method, which he talks about as the logical progression of this method: it will always give movecounts of under 30 moves thanks to "some innovations and new concepts he came up with", will from there be completely computer generated, and takes months for "very high aptitude" people to learn. He said he was going to publish a book which would instruct people how to do it and give anyone who read it "all the tools to get world class times" once they had finished and perfected his method.


----------



## Y2k1 (Dec 7, 2016)

shadowslice e said:


> Snyder2 is essentially FreeFOP with a different LL method. Anthony Snyder proposed it a while back and claimed it to be far superior to any methods ever invented (though he claims to have been the fastest in the world until 2004 or something like that but could never go to a comp because everyone was "organising things behind his back" and never invited him because they were jealous and though he was too good). You can read about it on his website snydermind if it hasn't been dropped and stuff but I warn you he does exaggerate a lot though to be fair it is not a bad method and definitely sub-10able of you put enough practise in.
> 
> There is also the hallowed "snyder3" method which he talks about as the logical progression of this method and will always give movecounts of under 30 moves thanks to "some innovations and new concepts he came up with" and will from there be completely computer generated and takes months for "very high aptitude" people to learn but he said he was going to publish a book which would instruct people how to do it and give anyone who read the book "all the tools to get world class times" once they had finished and perfected his method.


Umm... Wow. That's all I have to say. What is the LL exactly?

Thanks for the description btw.


----------



## mDiPalma (Dec 7, 2016)

Y2k1 said:


> Umm... Wow. That's all I have to say. What is the LL exactly?



doing an alg (fish) followed by a corner commutator (chips).

https://www.speedsolving.com/forum/threads/snyder-method-fish-v2-step-fish-step-ver-2.37503/

this doesn't fall into the OP's categorization because it doesn't solve an entire 'phase' at once. Yet in doing so, it outperforms most other methods. The "mathematical advantage" that Snyder likes to tout is legitimate.


----------



## Weston (Dec 8, 2016)

4Chan said:


> Haha, you remind me of Liquid KiwiKaki
> You're from Canada too!


You scrub. KiwiKaki was never on Liquid.


----------



## 4Chan (Dec 8, 2016)

Weston said:


> You scrub. KiwiKaki was never on liquid.


I WAS TRICKED BY THE AVATAR OH NO
AHHHHHH


----------



## mitja (Dec 8, 2016)

This is becoming quite an interesting discussion, and it will probably bring us nowhere. I think, if you want to be fast, it all comes down to muscle memory and TPS in the end. I believe CFOP is so popular because you need the least time to commit most of the stages to muscle memory. What I see is that people get faster when they bring the last layer into muscle memory, and even faster when they do the same with F2L and the cross.
When does intuitive F2L become unintuitive? When you do it almost like a reflex movement. And the fastest speedcubers do just that.
Most alternative methods need much more rehearsal and time to become muscle memory.
Well, those Snyder solves that he does on his (horrible) webpage look amazing. I can just imagine how fast it could be if he could do it as fast as Feliks. But then it would not be described as intuitive anymore. Ultimately, in the future somebody will just solve in God's number; that would be the fastest. How much time would they need for 20 or fewer moves? 1-2 sec?
For so-called very intuitive methods you need time: years, maybe decades. But then you get older, and slower with muscle memory.
So what are you searching for? The fastest way to become a world-level speedcuber? Probably CFOP? Maybe Roux? And definitely, stay young. I can see on the older cubers thread that guys with years on them (I am also 49) can be pretty fast, like Mark above. But for real TPS (like Feliks's 12) it is better to be young.
Just practise; you will feel it when muscle memory kicks in, when you start looking ahead, and when F2L (or the blockbuilding stage in other methods) doesn't feel intuitive anymore.
Somebody complained about the advice:
"do solves"
"look ahead"
"learn that other alg set"
"do this thing blind"
Yeah, it sounds boring, but it is 100% true.


----------



## obelisk477 (Dec 8, 2016)

FiWiFaKi said:


> TLDR: All resources expanding on full CFOP are all over the place, with no unanimous consensus on the way to go forward.



It doesn't really matter whether there is a consensus on the way forward, since there actually *is* a way forward. I think your discussion would be useful if we were all stuck and not really improving, as if there were some inherent roadblock in CFOP that kept people from getting better. But that just isn't the case. People just keep getting better, including Feliks.

Tbh, what the OP reminds me of is just an intelligent and well-worded version of the usual noob question, "what do I learn after OLL and PLL to get faster?" But no matter how well you state the question, the answer is always the same: practice. Practice is the way forward with CFOP, and it has served everyone who has kept practicing pretty well so far.


----------

