# Solving Methods: A Primer



## Kirjava (May 9, 2013)

This thread will attempt to give an extensive introduction to
properties different methods have. I will also give my thoughts on
each.

As well as people developing methods or considering new techniques, I
hope this may help the way you perform or think about regular
speedsolving.

### Basics

Solving methods consist of any number of steps which contain
instructions detailing how to bring the cube from one state to
another. In 'good' methods, each step will attempt to take maximum
advantage of the state of the cube. This makes analysis of first steps
quite interesting - with so much freedom it makes sense that no
specific step seems to be much more useful than alternatives. A step
may accomplish more than one thing at once.

Method definitions often break up one step into many smaller,
systematic, human-sized chunks. These chunks are essentially parts
of the same step: "blockbuild stuff" then "blockbuild more stuff"
becomes "blockbuild a lot of stuff". It can be advantageous for a
human to be aware of these abstractions yet able to ignore them.

On the point of steps starting to blend into each other, it's worth
noting that it's difficult to exactly define the term 'method'. Minor
variations give rise to different, yet closely related systems.
Classifying these systems as completely new methods feels needless,
yet it is difficult to draw a line.

Something that seems unpopular these days is a 'multi-system' (as
described by Singmaster). This is where a method will 'fork' at a
certain step and finish in different ways depending on the direction
forked. The direction chosen to fork in is based on which can take the
best advantage of the current situation, but either can be chosen if
desired. Examples of this technique can be demonstrated by creating
hybrid methods which will allow forking between the two at certain
stages. Multiple forks in the same method are not feasible/sensible
imo. This idea can be seen when using separate algs for special cases
to achieve more than the original intention of the step.

This is closely related to being 'method neutral' - being proficient
enough at multiple methods to allow you to choose a method at the
start of the solve based on the scramble. Obtaining this skill
requires the solver either to learn two methods at once and practise
both, or to practise one method and at a later date take up another
and bring it up to the proficiency of the first. The problems with
the latter technique are similar to those of switching to colour
neutrality.

A method's step order may not be fixed. Methods designed for blindsolving
can have this attribute - because of the nature of the method, where
each algorithm (mostly) leaves the rest of the cube alone, the order
is less important and changing it can be trivial.

### Solving Types

In this section, I will analyse the types of technique used to
complete a step. This can mean two different things (that have their
own categories) - the human approach to completing the step and what
type of state change has taken place during the step.

There are a few systems that humans use to accomplish steps. The most
apparent one being the algorithmic approach to something. Called "the
worst thing in speedcubing" by Gilles Roux, this system enumerates the
cases in a given step and lists solutions for each. The solver will
then learn a solution for each case so that when it appears in a solve
they can recognise and recall the solution from memory.

An algorithmic approach can be used in steps that are traditionally
seen as intuition-based, but this is not generally advised - it
usually entails using some basic tricks or rules to reach a state
that you have an algorithm for. This generally leads to longer
solutions, but that does not necessarily make it worse.

An intuitive approach to solving a case is arrived at by logic and
reasoning generated through experimentation. I think the ability to
use intuition to generate solutions for something is important - it
demonstrates awareness of the effect caused by applying moves, which
is important for understanding the cube and the shortcuts available.
In comparison to algorithmic approaches, a solution for each case is
discovered by the user using logic or trial and error. Cases may be
improved over time as the solver's ability to intuit more efficient
solutions improves.

Many steps can be approached in either an algorithmic or intuitive
way, but some may require one or the other to be used. Something of
note is that after a while, whichever approach you take appears to
converge towards the other. That is to say that intuitive approaches
to cases begin to become second nature and are executed in the same
manner as algorithms, while the solver gathers an understanding of
how an algorithm works the more familiar they are with it. In
situations where either approach can be used, the labels
intuitive/algorithmic describe the learning process - as the end
result is essentially the same.

A learning system that doesn't quite fit into either category is a
rule-based one. This consists of specific things to perform for each
of many cases. There will be different groups of cases that require
the same thing to be performed to advance to the next part - which
will either be another case from a category, or completion of the
step. Examples of this are the edge orientation step from the Roux
method, dual-alg single-step systems for LL (Petrus 270, OLLCP hax),
or some cubeshape systems for Square-1. I like to call these
'flowcharting systems', as the data can be expressed as a flowchart.

Another rule-based system involves using a formula to generate move
sequences. The only examples I can think of that do this are
commutators and conjugates. This involves using a formula to create
an algorithm to perform a certain task. While intuition is required
to create the different parts of the formula, the structure of the
algorithm created is predefined. This results in a system that
produces predictable results and can be used 'on the fly' without
prior learning.
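For instance, the commutator A B A' B' and the conjugate A B A' can
be generated mechanically once you can invert a move sequence. A
minimal sketch (the move notation handling is deliberately
simplified):

```python
# Generating algorithms from the commutator and conjugate formulas.
# A move is a face letter optionally followed by "'" (inverse) or
# "2" (half turn); wide/slice suffixes are ignored for simplicity.

def invert(seq):
    """Invert a move sequence: reverse the order, invert each move."""
    inverted = []
    for move in reversed(seq):
        if move.endswith("'"):
            inverted.append(move[:-1])   # R' -> R
        elif move.endswith("2"):
            inverted.append(move)        # R2 is its own inverse
        else:
            inverted.append(move + "'")  # R -> R'
    return inverted

def commutator(a, b):
    """[A, B] = A B A' B' - affects only pieces both A and B touch."""
    return a + b + invert(a) + invert(b)

def conjugate(a, b):
    """A B A' - performs B's effect in a position set up by A."""
    return a + b + invert(a)

# A classic corner 3-cycle, built rather than memorised:
commutator(["R", "U", "R'"], ["D2"])
# returns ["R", "U", "R'", "D2", "R", "U'", "R'", "D2"]
```

Because the structure is fixed, the result is predictable even when
the parts A and B are found on the fly during a solve.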

As with other systems, these will eventually become algorithmic *and*
intuitive.

There are two different types of state change that a step can accomplish.

Direct solving is the most prevalent and *has* to feature in a
method at some point for a cube to be solved. With this technique,
pieces dictated by the step are fully permuted and oriented with
respect to a given reference. This does not have to be an external
reference point, nor does it have to be the same reference point as
in previous steps (pseudoblocks).

Quoting Quadrescence: "I think if you are directly solving, you have
an absolute reference point throughout the process". While this 
contradicts what I am saying, I see it as a perfectly valid view on 
what constitutes direct solving. This is a good example of the grey 
area seen when describing these concepts, which I have had trouble
with when writing this article.

Reduction is not a requirement for a method, but can be a powerful
way to bring a cube closer to solved. When a step involves a
reduction, there is a subset of states that the cube is required to
be in for completion of the step. The step after will only deal with
solving those states, so removing the others facilitates its
completion. This also further reduces the number of states the cube
can be in, bringing it closer to solved. While this is also true for
direct solving, the required states do not require pieces to be
permuted or oriented correctly. Certain attributes can instead be
enforced that involve *only* permutation or orientation, or pieces
being grouped in certain ways.

Some steps contain a mixture of both types, like direct solving some 
pieces and reducing others.

Pseudo solving is an interesting quirk. This entails solving to a
state where, after the normal conclusion of the method, extra moves
will be required to resolve the pseudo creation. So instead of
solving to the solved cube state, you are solving to a state one (or
more) moves away and then adding those moves at the end. This can be
advantageous because pseudoblocks may be easier to create than the
normal blocks required.

A pseudo move can be applied over the course of a few steps and can be
used with either direct solving or reduction.

### Method Feasibility

Unfortunately, most metrics for measuring method feasibility cannot be
quantified easily. Along with personal preference, this makes
comparing different systems a mostly subjective ordeal.

A major aspect of what constitutes a good method is the movecount. 
While this is not the quintessential element in deciding how well a
method will perform, it is a purely quantifiable attribute that gives
a good indication. All major speedcubing methods in use have a similar
movecount, with ones giving a higher movecount providing advantages in
other areas.

Because of the human element to speedsolving, ease of use is another
key factor. Methods that are without complication tend to fare better
as they are more attractive to learn and receive more testing. While
more complicated methods may perform better in the long run, 
demonstrating this fact proves difficult when no one desires to try.

Methods with complicated steps also require extra thinking time, which
should not be present when speedsolving. A disadvantage in many
methods with low movecounts is the thinking time required to execute
certain steps.

Another ease of use issue is the required learning overhead. This
usually means the quantity of algorithms that must be learnt in
order to fully utilise a method. Methods with fewer algorithms tend
to rely on using more steps and intuition, leaving a less efficient
overall solve, while methods with algorithms numbering into the
hundreds have a steep learning curve and are difficult to implement.

After initial learning of a large algorithm set, this difficulty
results from the trouble of attempting to recall an algorithm from
such a large set without thinking time. While recognition of a case
from a large set is also an issue, it is not as significant as
recall. So far, no one has been able to show proficiency with an
algorithm set numbering in the hundreds matching that achieved with
smaller sets.

As for the algorithms themselves, subsets requiring algorithms do
not usually restrict themselves to a certain movegroup - so users
can find different algorithms to suit their preferences as they
please. However, more intuition-based steps tend to lend themselves
to being executed with certain movegroups. Steps with
fingertrick-friendly movegroups generally fare better than others -
but this aspect is largely affected by personal preference.

### Extensions & Variations

Each base method can have countless variations and extensions.
Usually, a step or group of steps can be accomplished with multiple
systems that are of seemingly near equal feasibility. There are almost
endless combinations of alternative approaches to things.

Forced step concatenation is a popular idea for extending a method.
This involves learning how to solve two steps at once, usually
resulting in a huge increase in the number of things to learn. When
creating a method, this should be considered for each pair of
consecutive steps to ensure maximum efficiency is produced. However,
this is not always humanly feasible.

A similar idea is to solve only part of the next step instead. This
gives you a subset of cases for the next step. The intention would
be to make this subset of cases nicer than the larger set. It's
debatable whether the time saved by having better cases is negated
by the cost of getting to that subset in the first place.

Another use for partial concatenation is to reduce the number of cases
enough so that full step concatenation can take place with the reduced
step and the step after that.

Alongside different variations, systems exist that only apply in
certain situations. While these systems provide advantages in those
situations, they are not useful for every cube configuration and are
not considered complete systems. While learning tricks for magical
special cases will usually be advantageous, implementing them in
solves can introduce recognition or recall issues. Tricks like these
usually entail forcing skips of some kind.

### New Method Development

People creating methods solely for entertainment purposes can skip
this section (and indeed the entire thread). 

Researching existing methods is extremely important to the
development of your own. Not only will this stop you from
regurgitating methods that we have seen over and over again, it will
give you ideas of your own. You will be able to see what works and
what does not, as you can see what is popular and already used to
attain good results.

Most techniques have remained largely unexplored due to the
difficulty of testing new ideas. Do not discount them simply because
they are not popular - there are good examples of systems with low
usage going on to achieve better results than their popular
counterparts. Bearing this in mind, however, you should learn to
judge whether a system has outright bad attributes that make it
undesirable for use. Bearing _that_ in mind, consider that your
judgement may be wrong.

Alternatives to existing steps are a good place to start, as these
currently have the highest opportunity for producing completely
viable results. This doesn't give a very interesting outcome though,
as the alternative will often have results similar to the existing
system, with less documentation and use.

These days it seems that most trivial and obvious full methods for 
well known puzzles have been completely covered. Creating brand new
ideas or improving existing ones seems to have to rely on really
abstract concepts.

You should be aware that most of your ideas will not work or will be
unsuitable for speedsolving. Be prepared to let go of an idea if it is
bad - remaining objective is key.

TL;DR METHODS!!!!!!11


----------



## Smiles (May 9, 2013)

great job even though i skipped to tldr it was very informative. METHODS


----------



## TheLizardWizard (May 9, 2013)

A nice read; this should be the go-to post to show now when people "invent" the belt method or keyhole F2L


----------



## stoic (May 9, 2013)

Great read, thanks Kir


----------



## pipkiksass (May 9, 2013)

That's why I love the speedsolving community - some very erudite and intelligent individuals! A great piece - very informative and well written.


----------



## MaeLSTRoM (May 9, 2013)

Really great post Kir. Nice to see you can be constructive as well at times :b
Want to fix the formatting though? It's only using a very narrow column of the space, which makes it quite hard to read in places.


----------



## Noahaha (May 9, 2013)

Cool stuff.

One thing you might want to mention is that learning 100+ algorithms is only infeasible if using a purely algorithmic approach. I'm sure plenty of people know well over 100 "algs" for F2L and over 1000 for 3-style 3BLD.


----------



## mark49152 (May 9, 2013)

Interesting stuff, thanks. It would be great if there were more examples (lots of examples).

The idea of a multi-system that forks at certain points is one I like. It doesn't really make sense that the same sequence of steps is always going to be the most efficient way to complete a solve. In that sense, the comment you made about algorithms versus intuitive also applies to methods; a method is also an "algorithm" and although learning rigid method(s) is a good place to start, the more you learn the more it converges on the intuitive and you can become more flexible, mixing and matching steps according to how the solve is going.


----------



## bobthegiraffemonkey (May 9, 2013)

Good post, I read it all even though I only make methods for fun, then actually use them for some reason.

I agree with Noah, BLD solvers tend to use lots of 'algs', but since these are mostly intuitive and are based on a far smaller number of patterns, it isn't really as many algs as it appears to be. Quoting several hundred algs for full BH is just misleading. Anyway, can a method with similar alg properties to BLD be made for sighted solving? Just an idea I had there. Hmm, I guess it's basically like your 1LLL system with an ideal system for deciding which alg to use first. Oh well, maybe that wasn't a new idea.


----------



## elrog (May 10, 2013)

Great post. You explain things very well, so even if I didn't learn anything I didn't already know, I can still see this being useful to newcomers. I agree with the section about how algorithms become intuitive and intuitive things become algorithms. Something cool you could add is a brief overview of each of the most popular methods for various types of solving such as FMC, speedsolving, and BLD. It could provide a guide for beginners to help them decide what method they want to learn.


----------



## Kirjava (May 10, 2013)

Noahaha said:


> One thing you might want to mention is that learning 100+ algorithms is only infeasible if using a purely algorithmic approach. I'm sure plenty of people know well over 100 "algs" for F2L and over 1000 for 3-style 3BLD.



I didn't say they were unfeasible, just that they 'give a steep learning curve and difficult implementation'. 

It's true that no one has yet shown proficiency with large alg sets that *require* an algorithmic approach, but that doesn't mean it's unfeasible.



mark49152 said:


> The idea of a multi-system that forks at certain points is one I like. It doesn't really make sense that the same sequence of steps is always going to be the most efficient way to complete a solve.



I think it's not likely to produce good results. Maximising efficiency can lead to a decrease in overall speed.



mark49152 said:


> In that sense, the comment you made about algorithms versus intuitive also applies to methods; a method is also an "algorithm" and although learning rigid method(s) is a good place to start, the more you learn the more it converges on the intuitive and you can become more flexible, mixing and matching steps according to how the solve is going.



It doesn't 'converge' on intuitive. I think mixing up entire steps is a bad idea and has always been problematic, but adapting to the current situation and adding influence to improve the solve within the original guidelines is a fine adaptation. Changing entire steps adds thinking time and requires you to be well versed in multiple disciplines equally, dividing time between them that you could spend getting much better with one.

But y'know, this is just my conjecture. Would be very happy to be proven wrong.


----------



## mark49152 (May 10, 2013)

Do you consider it a "multi-system" to choose between OCLL/PLL and COLL/EPLL depending on the case that comes up?


----------



## Kirjava (May 10, 2013)

No, as I said above it's a method extension - EPLL is a subset of PLL - it's partial step concatenation. A 'multi-system' would require the OCLL/PLL alternative to produce non-PLL cases after the first algorithm.


----------



## mark49152 (May 10, 2013)

So CLL/ELL?


----------



## Kirjava (May 10, 2013)

Yes.

'multi-systems' are just systems that have an aspect of solution system neutralness in one or more steps. This is distinct from method neutralness in that these systems have well defined steps that are required to be adhered to, while method neutral solving can have completely different techniques.

One thing I didn't really touch on is meta-methods. While similar to multiple system based solving, it doesn't involve any kind of forking. It is a method that has well defined requirements for completed steps, but doesn't require a specific system to be used to complete any of them.

Meta-methods are good for describing basic rules for different families of related methods.


----------



## Petro Leum (May 10, 2013)

great post, agree with all if it; nicely structured as well!

loved the idea of "forking" especially. gotta think about that some more.


----------



## mark49152 (May 10, 2013)

Kirjava said:


> 'multi-systems' are just systems that have an aspect of solution system neutralness in one or more steps. This is distinct from method neutralness in that these systems have well defined steps that are required to be adhered to, while method neutral solving can have completely different techniques.


OK. If I understand you correctly:-

- F2L/OLL/PLL is a different method to F2L/CLL/ELL because the intent is to proceed via a different intermediate state;
- F2L/EO/COLL/EPLL is the same method as F2L/OLL/PLL because COLL subsumes part of the PLL task and the remaining EPLL is a subset of PLL;
- F2L/EO/COLL/EPLL is a different method than F2L/CLL/ELL even though EPLL is a subset of ELL, because it requires that EO is completed beforehand and preserved during CP, thus changing the sequence of steps;
- If I complete F2L then make a choice between OLL/PLL and COLL/EPLL based on whether EO happens by chance, that is NOT a multi-system;
- If I complete F2L and make a choice between CLL/ELL and COLL/EPLL based on whether EO happens by chance, that IS a multi-system.


I can't really comment on terminology, as that is whatever it is defined as, and your definition is clear enough; but the distinction between the last two seems somewhat academic. Ultimately, in both cases I set out with a consistent plan to solve the cube, which involves at a certain intermediate state "forking" by making a choice of which is the most expedient out of multiple possible next steps. That seems to be the strongest characteristic of the plan, rather than categorising it by the properties of the available routes. 

Do the following all fit your definition of extensions (to CFOP) as opposed to being different methods?

- Xcross
- Partial edge control or ZBLS
- EJLS
- SuneOLL
- CPLS


----------



## MaeLSTRoM (May 10, 2013)

mark49152 said:


> ... stuff ...



Aaaaand I think this is about the point when meta-methods become what you're referring to. So the extensions you listed at the end would be part of the CFOP meta-method but not CFOP as a method in itself.


----------



## mark49152 (May 10, 2013)

Kirjava said:


> One thing I didn't really touch on is meta-methods. While similar to multiple system based solving, *it doesn't involve any kind of forking*.





MaeLSTRoM said:


> Aaaaand I think this is about the point when meta-methods become what you're referring to. So the extensions you listed at the end would be part of the CFOP meta-method but not CFOP as a method in itself.


Most of the extensions might fit the definition, but the COLL/CLL forking thing doesn't.


----------



## Kirjava (May 10, 2013)

You should read the paragraph I wrote before the one on multi-systems again.


----------



## mark49152 (May 10, 2013)

Kirjava said:


> You should read the paragraph I wrote before the one on multi-systems again.


That's a cop-out - a couple of posts ago you were trying to draw a line, now you're referring me back to your earlier comment that it's hard to draw a line  

Anyway, I agree with that paragraph. 

You didn't take the bait with CPLS in my "extensions" list. Obviously it changes the usual CFOP step order, meaning that by your definition using CPLS would be a different method not an extension.


----------



## Kirjava (May 10, 2013)

Seems like you're trying to 'catch me out' or something. It's not always easy to fit things into the exact definitions I gave. Many things can fit into multiple ones.

CPLS is a different method because it has different steps. CPLS is an extension because it is a type of partial step concatenation.

Shades of grey exist in the world.


----------



## mark49152 (May 10, 2013)

Kirjava said:


> Seems like you're trying to 'catch me out' or something.


Well not really - I was just challenging your earlier yes/no answers about COLL/EPLL and CLL/ELL. I agree that it's shades of grey, that's kind of my point. 

The forking/multisystem thing is really interesting as a general concept, regardless of whether the different forked paths qualify as the same or different methods.


----------



## elrog (May 12, 2013)

I don't see why everyone thinks forking is such a great idea. It wouldn't be good at all for speedsolving. You have to recognize 2 different approaches before you do either one. Even if you get a better alg to perform, you used more time inspecting. Also, if a solve ends up having a really good case for the one path someone else does, you using a fork to check for that other path would slow you down. Also, having more algs means you won't get as proficient at performing all of them, it's a pain to memorize extra algs, and it makes you have longer recall time.

I can see how learning an extra fork to a method would help in FMC, but I'm still not convinced that it is worth it. If you're going to learn a lot of algorithms for different forks, just learn ZBF2L/ZZLL or something like that.


----------



## CubicNL (May 12, 2013)

Thank you for writing these things down in such an orderly fashion, finally a clear piece on this topic.
I agree with most of the things you brought forward. With regard to the intuitive/algorithmic part, I think that the future of speedsolving lies in intuition and that that will ultimately prove to be the key to speed.
Lastly, I'm not sure whether beginners in method development should extensively study existing methods and ideas. I certainly think that it prevents you from making lots of mistakes, but it may just pigeonhole the person having the one innovative idea into the existing ideas. My call would be that we do indeed have powerful method structures (with my personal favourite being Roux), but maybe that's just because I'm pigeonholed as well.


----------



## Kirjava (May 13, 2013)

elrog said:


> I don't see why everyone thinks forking is such a great idea. It wouldn't be good at all for speedsolving. You have to recognize 2 different approaches before you do either one. Even if you get a better alg to perform, you used more time inspecting. Also, if a solve ends up having a really good case for the one path someone else does, you using a fork to check for that other path would slow you down. Also, having more algs means you won't get as proficient at performing all of them, it's a pain to memorize extra algs, and it makes you have longer recall time.



All these things apply to OLLCP and CMLL+EO too, and people have no issue using these to improve their technique.

There are good arguments against forking to a different system, but you are not making them. 



CubicNL said:


> With regard to the intuitive/algorithmic part, I think that the future of speedsolving lies in intuition and that that will ultimately prove to be the key to speed.



Really? I'm almost convinced that brainlessness is the key to speedsolving. Intuition will allow you to find good shortcuts, but that needs to be executed algorithmically when solving to achieve an advantage.



CubicNL said:


> Lastly, I'm not sure whether beginners in method development should extensively study existing methods and ideas. I certainly think that it prevents you from making lots of mistakes, but it may just pigeonhole the person having the one innovative idea into the existing ideas.



I see what you're trying to say, but it's a bit silly. If people do not know what methods exist, they are doomed to rediscover them.

Surely having wider knowledge won't hamper your creativity?


----------



## MaeLSTRoM (May 13, 2013)

Kirjava said:


> Surely having wider knowledge won't hamper your creativity?



Potentially more to the point, if it does then you're not having original ideas :b


----------



## CubicNL (May 13, 2013)

Kirjava said:


> Really? I'm almost convinced that brainlessness is the key to speedsolving. Intuition will allow you to find good shortcuts, but that needs to be executed algorithmically when solving to achieve an advantage.



Yes, algorithmic execution is the fastest, but only if it comes from intuition. And I'm just making a wild guess about the future - who knows what will be possible in 50 years..




Kirjava said:


> I see what you're trying to say, but it's a bit silly. If people do not know what methods exist, they are doomed to rediscover them.
> 
> Surely having wider knowledge won't hamper your creativity?



If people continue to rediscover them then we probably have the best ideas already.
And I guess, or actually hope that you're right about more knowledge not hampering creativity. After all we can't really know the answer to that unless someone comes up with something.


----------



## jayefbe (May 13, 2013)

CubicNL said:


> Lastly, I'm not sure whether beginners in method development should extensively study existing methods and ideas. I certainly think that it prevents you from making lots of mistakes, but it may just pigeonhole the person having the one innovative idea into the existing ideas.



I call this the "I don't want to learn music theory because I won't be as creative" argument. Being a guy who used to play in bands through my teens and early twenties, I heard this argument many many times. Invariably it came from someone incredibly lazy, who had merely rationalized his laziness. I don't agree with it. A truly creative person is able to integrate what has been done before, and is then able to apply that knowledge in innovative ways. 

And the "rediscovery" of the same methods over and over again doesn't necessarily mean that these are the "best" methods. More likely, they're the simplest and most obvious ones. Not necessarily mutually exclusive, but I wouldn't use that as an argument supporting how good an idea or method is.


----------



## CubicNL (May 13, 2013)

jayefbe said:


> I call this the "I don't want to learn music theory because I won't be as creative" argument. Being a guy who used to play in bands through my teens and early twenties, I heard this argument many many times. Invariably it came from someone incredibly lazy, who had merely rationalized his laziness. I don't agree with it. A truly creative person is able to integrate what has been done before, and is then able to apply that knowledge in innovative ways.
> 
> And the "rediscovery" of the same methods over and over again doesn't necessarily mean that these are the "best" methods. More likely, they're the simplest and most obvious ones. Not necessarily mutually exclusive, but I wouldn't use that as an argument supporting how good an idea or method is.



I think the comparison between music and cubing is not completely correct here. Finding a truly new concept for a speedsolving method could, if something like that is ever found, mean that that concept goes against the established ideas. Being creative in music is usually only hailed if you are creative somewhat within the boundaries of the established genres/musical language. Also, we're talking about someone who would be a pioneer in speedsolving, and I don't think all music pioneers based themselves upon existing theory.
You must understand that I am not totally against exploring methods first - that seems like a completely logical thing to me - I just hope that some idea won't be held back.

About the rediscovery I must agree with you. But what I meant is that if a lot of people who think about the best ways to solve the cube can't come up with any better system and keep falling back on the existing ones, that those are probably pretty good. Of course not necessarily the best.


----------



## elrog (May 14, 2013)

Kirjava said:


> All these things apply to OLLCP and CMLL+EO too, and people have no issue using these to improve their technique.
> 
> There are good arguments against forking to a different system, but you are not making them.
> 
> ...



Every argument I made is a valid argument. Is the argument that you take 2 times the inspection time not a strong argument?

I don't believe that the future of speedsolving lies in algorithms. Algorithms have basically been used to their maximum potential now, while intuition still has room for improvement no matter how good you are. There is just so much more potential in intuitive solving.

If you're thinking you're going to somehow come up with better algorithms for a new, better step, I'm just telling you now that it probably won't happen. The only way to make an algorithmic solve better is if you could solve more with a single algorithm, but that requires a lot of algorithms, which hinders your ability to master and recall every algorithm quickly.


----------



## mark49152 (May 14, 2013)

Intuitive solving reduces move count but also reduces tps. Looking at stats for CFOP solvers, F2L is always slower than LL. I would guess the same is true for Roux's steps. 

Since solve time depends purely on move count and tps, logic suggests that a good method requires a mix of both algorithmic (to get big chunks of solving work done at high tps where intuition would reduce tps too much) and intuitive (to save moves where use of algorithms would push the move count too high or reduce overall tps through poor recognition or recall). 
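The tradeoff above can be sketched numerically. The figures below are purely hypothetical; they only illustrate the point that a step with more moves can still be faster overall if its tps advantage is large enough.

```python
def solve_time(moves: int, tps: float) -> float:
    """Solve time in seconds: move count divided by turns per second."""
    return moves / tps

# Hypothetical figures for a single step of a solve.
intuitive = solve_time(moves=10, tps=4.0)    # fewer moves, slower turning
algorithmic = solve_time(moves=14, tps=7.0)  # more moves, faster turning

print(intuitive)    # 2.5
print(algorithmic)  # 2.0
```

Here the algorithmic step wins despite using 40% more moves, which is why a good method mixes both rather than minimising either quantity alone.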

I believe the ideal method would find the sweet spot between the two. That's what today's methods look like too. 

Forking is interesting because it might provide a way for algorithmic steps to become more efficient. For example, some OLL algs save significant move count over 2-look and are fast, but others save relatively few moves and of course receive much less practice than the 2-look algs. Therefore it doesn't make sense to think of full OLL as a monolithic set that must always be learned in its entirety. For some of those OLL cases, there might be an alternative solving route that is faster, and I don't see why recognising and recalling that step would require any more time than recalling the OLL alg for a case.


----------



## Kirjava (May 14, 2013)

elrog said:


> Every argument I made is a valid argument. Is the argument that you take twice the inspection time not a strong one?



It would be if it was true.

Recognition is not even close to being the overhead that people assume it is. I simply can't accept this argument after going from recognising one step to recognising two in my main speedsolving method without any kind of issues at all.



elrog said:


> I don't believe that the future of speedsolving lies in algorithms. Algorithms have basically been used to their maximum potential now, while intuition still has room for improvement no matter how good you are. There is just so much more potential in intuitive solving.



2L1A step concat is an algorithmic improvement that hasn't been used yet. Others are sure to exist. Making assumptions about 'maximum potential' and what is possible is a bad idea.

I don't think there's much substance to what you're saying. Intuitive solving is only as good as the human executing it. It needs to become brainless to reach any kind of decent speed.

This is not an 'algorithm' vs. 'intuition' argument. Obviously both have their places. Saying one is oh so much better than the other is just silly.



mark49152 said:


> a good method requires a mix of both algorithmic (to get big chunks of solving work done at high tps where intuition would reduce tps too much) and intuitive (to save moves where use of algorithms would push the move count too high or reduce overall tps through poor recognition or recall).



thumbs up


----------



## elrog (May 15, 2013)

Kirjava said:


> It would be if it was true.
> 
> Recognition is not even close to being the overhead that people assume it is. I simply can't accept this argument after going from recognising one step to recognising two in my main speedsolving method without any kind of issues at all.
> 
> ...



I don't see how it is possible that you wouldn't take more recognition time to recognize 2 substeps rather than one. It is only possible that recognition time wouldn't matter if you recognize both before you even get to the step, which isn't always even done with only one substep.

I think you meant 1L2A rather than 2L1A. That is the system that you were working on. I think that you will have lowered recognition time due to the fact that you recognize both algorithms with one look, but it won't be cut in half. What has me thinking that this might not be the best idea is that you have to know every position the last layer can be in, which I can see leading to major recall problems. This is the same reason that no extremely large algorithmic steps have been proven to be worth it yet.

It would also be possible to look at the last layer and predict the PLL after doing the OLL by learning each last layer case. I think it is pretty genius to use a set of unrelated algs that is still able to cover every case.

I also agree that you need the right balance of intuition and algorithms, but I think people are moving farther to the algorithmic side because it's easier.


----------



## BaMiao (May 15, 2013)

elrog said:


> I don't see how it is possible that you wouldn't take more recognition time to recognize 2 substeps rather than one. It is only possible that recognition time wouldn't matter if you recognize both before you even get to the step, which isn't always even done with only one substep.



It's totally possible to use forking without sacrificing recognition time. I think it would be akin to when you spot two potential pairs during an f2l, and deciding which one to tackle first. It isn't that difficult.

For example, say you start with a 2x2x1 block, Petrus-style. The next thing you see determines whether to extend that to a 2x2x2 and continue with Petrus, or extend to 3x2x1 and continue with a Roux solve. Or, you can spot an opportunity to construct pseudo-blocks, etc.

The key is in going with what the cube gives you. You don't have to recognize every possibility, the same as how you don't have to look for every f2l pair after finishing the cross, and then deciding which one is most efficient.


----------



## Kirjava (May 16, 2013)

elrog said:


> I don't see how it is possible that you wouldn't take more recognition time to recognize 2 substeps rather than one. It is only possible that recognition time wouldn't matter if you recognize both before you even get to the step, which isn't always even done with only one substep.



I switched from recognising CMLL to recognising CMLL and EO a few years ago. I now recognise CMLL+EO faster than I ever did CMLL alone, so it has not become some kind of speed bottleneck that you seem to assume it would - and the recognition time has certainly not doubled.



elrog said:


> I think you meant 1L2A rather than 2L1A. That is the system that you were working on. I think that you will have lowered recognition time due to the fact that you recognize both algorithms with one look, but it won't be cut in half.



If you think that you recognise both algs with one look in my system, you don't understand the method.



elrog said:


> I also agree that you need the right balance of intuition and algorithms, but I think people are moving farther to the algorithmic side because it's easier.



Can you explain why you think this is true or when people moved? Speedsolving method use has been mostly static in the last few years with the one exception being Roux, which itself is a shift towards intuitive based solving.



BaMiao said:


> The key is in going with what the cube gives you. You don't have to recognize every possibility, the same as how you don't have to look for every f2l pair after finishing the cross, and then deciding which one is most efficient.



Your post sums it up quite nicely. It won't cause any problems if you're only doing it when you see it.


----------



## Logiqx (Aug 17, 2013)

I wondered if I could draw a diagram showing the better-known 3x3x3 methods, grouped into basic categories. The approach to categorisation is not directly related to the thought process during a solve and is simply based on whether there appears to be a bias towards solving edges or corners, blocks or layers.

This was just a little exercise for myself whilst reading about all of the methods listed in the wiki. Perhaps it is of no interest to anyone else but I enjoyed thinking it through with my limited knowledge of the methods!

The diagram below is without doubt an over-simplification of the many methods available, but related methods have still been clustered together. On the flip side, you may also see methods which don't seem particularly similar quite close together. I've shown some of the stronger links between methods with dotted lines.

The image itself - http://cubing.mikeg.me.uk/Methods.jpg

My approach to categorisation was done intuitively in most cases and consisted of 2 steps:

1) Choose the quartile - e.g. edge bias (edges solved when no more than half of corners solved) or corner bias (corners solved when no more than half of edges solved). If it's neither edge biased nor corner biased, then it is a simple call as to whether it is primarily a block-building method or a layer-by-layer / algorithmic method.

2) Choose one of 3 segments within the quartile. e.g. Block / Layer methods might solve all of the corners before finishing with edges. Corner / Edge methods might have a bias towards blocks or layers at the beginning of the solve.

Known variations of a given method will generally leave it in the same segment or move it into an adjacent segment.
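The first of the two steps above can be sketched as a small decision function. The 'no more than half' threshold comes from the post itself; the function name, argument names, and the block-building flag are hypothetical placeholders rather than anything from the actual diagram.

```python
def quartile(corners_done_when_edges_done: float,
             edges_done_when_corners_done: float,
             block_building: bool) -> str:
    """Step 1: pick the quartile for a method.

    The first two arguments are the fraction of corners (resp. edges)
    already solved at the point the edges (resp. corners) are completed.
    """
    if corners_done_when_edges_done <= 0.5:
        # Edges solved when no more than half of corners solved.
        return "edge bias"
    if edges_done_when_corners_done <= 0.5:
        # Corners solved when no more than half of edges solved.
        return "corner bias"
    # Neither bias: fall back to the block-building vs layer-by-layer call.
    return "block-building" if block_building else "layer-by-layer / algorithmic"
```

For example, a method that finishes its edges while only a quarter of the corners are placed would land in the edge-bias quartile: `quartile(0.25, 0.9, False)` returns `"edge bias"`.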

I said earlier that this was really just a little exercise for myself whilst reading about different methods. Thoughts / comments are welcome though!


----------



## irontwig (Aug 17, 2013)

Made something similar way back when:

http://www.speedsolving.com/forum/attachment.php?attachmentid=1391&d=1292924809


----------



## Cubenovice (Aug 17, 2013)

Logiqx said:


> The image itself - Methods.jpg



Nice effort.

But in my humble opinion Human Thistlethwaite and Human Kociemba are not block biased.
Perhaps you could move them to the empty slice below Edge Bias and label them something like "EO / Reduction / Separation".


----------



## GaDiBo (Aug 17, 2013)

Why those pictures still don't have ECDU method?


----------



## Logiqx (Aug 17, 2013)

irontwig said:


> Made something similar way back when



That's not far off my original idea - plotting methods in two dimensions. I can't remember what made me move to the current idea of a circle, lol.



Cubenovice said:


> Nice effort.



Thanks!



Cubenovice said:


> But in my humble opinion Human Thistlethwaite and Human Kociemba are not block biased.
> Perhaps you could move them to the empty slice below Edge Bias and label them something like "EO / Reduction / Separation"



That's a good idea. I wasn't happy with how I'd represented those two methods and almost put them in their own separate bubble!



GaDiBo said:


> Why those pictures still don't have ECDU method?



I'm trying to keep it simple by listing a single belt method for now.


----------



## AHornbaker (Aug 22, 2013)

I think it would be cool to rate methods/variations on a certain scale. I already do it to some degree by looking at alg count, move count, number of looks, difficulty of recognition, and quickness of algs. It could fail and produce some obscure variation that looks good, but is terrible. On the other hand, it might force us to reconsider "good" methods and discover better ones that were overlooked. Could be a useful addition to the wiki pages when looking at methods/substeps. Thought this might be relevant to the thread. Cheers!


----------



## AvGalen (Aug 22, 2013)

GaDiBo said:


> Why those pictures still don't have ECDU method?





Logiqx said:


> ...I'm trying to keep it simple by listing a single belt method for now.



I love how all attempts to establish ECDU as a name are ignored and simply turned into "belt".
It reminds me of the trousers->pants chat-conversion


----------



## Kirjava (Aug 22, 2013)

Logiqx said:


> I wondered if I could draw a diagram showing the better-known 3x3x3 methods, grouped into basic categories. The approach to categorisation is not directly related to the thought process during a solve and is simply based on whether there appears to be a bias towards solving edges or corners, blocks or layers.



Enjoyed looking at your content. I've wanted to create a taxonomy for methods for a while that shows relations between different systems, just haven't gotten to it. 

A venn diagram would work nicer possibly.


----------



## Logiqx (Aug 27, 2013)

Kirjava said:


> Enjoyed looking at your content. I've wanted to create a taxonomy for methods for a while that shows relations between different systems, just haven't gotten to it.



Cool, nice to know. Maybe a taxonomy would be a good addition to the wiki?

When I get a bit of slack time, I will see how a venn diagram looks. It will probably end up with fewer categories available and therefore lose some of the distinctions (e.g. block with corner bias = corners with block bias). Maybe that would be a good thing, maybe not... should be easy to tell once drawn!


----------



## davidmg90000 (Jan 25, 2014)

I'm new. Could somebody tell me all the methods used to solve a 4x4?


----------



## Ollie (Jan 25, 2014)

davidmg90000 said:


> I'm new. Could somebody tell me all the methods used to solve a 4x4?



http://www.speedsolving.com/wiki/index.php/Category:Big_Cube_methods

For future reference, most stuff can be found in the Speedsolving Wiki page


----------



## Methuselah96 (Jan 25, 2014)

Ollie said:


> http://www.speedsolving.com/wiki/index.php/Category:Big_Cube_methods
> 
> For future reference, most stuff can be found in the Speedsolving Wiki page



That is missing some more recent methods I think (I don't see Hoya or Yau).


----------



## Kirjava (Jan 25, 2014)

davidmg90000 said:


> I'm new. Could somebody tell me all the methods used to solve a 4x4?



Cage, Sandwich, K4, Yau, Hoya, Redux, Columns, ROAR


----------



## TDM (Jan 25, 2014)

Kirjava said:


> Cage, Sandwich, K4, Yau, Hoya, Redux, Columns, ROAR


What's ROAR?


----------



## Kirjava (Jan 25, 2014)

TDM said:


> What's ROAR?



Roux on a revenge. Something like 1x3x4 -> 1x3x4 -> CLL -> D centre -> centre centre edge columns -> ELL

Also, I forgot Meyer


----------



## bobthegiraffemonkey (Jan 25, 2014)

Also LBL and OBLBL ([optimised blockbuilding] layer by layer).


----------



## QPowerPrime (Dec 20, 2014)

I have recently started (with ZZ) not bothering about which edges I pair with the corners during F2L, and as I use COLL when I permute the edges I also permute the edges in the slots. There is only one case I cannot solve: 4 cycle of slots and adjacent edge swap on U layer. I got a 4.76 solve doing this.


----------



## mDiPalma (Dec 20, 2014)

QPowerPrime said:


> I have recently started (with ZZ) not bothering about which edges I pair with the corners during F2L, and as I use COLL when I permute the edges I also permute the edges in the slots. There is only one case I cannot solve: 4 cycle of slots and adjacent edge swap on U layer. I got a 4.76 solve doing this.



R2 D' F R2 E2 R2 F' U2 D' r2
and inverse

hopefully with these algs you can average sub 6


----------

