# Learning programs



## shadowslice e (Jan 27, 2017)

Has anyone ever tried to write a program which starts out with random moves to solve a cube, then selects the most "successful" ones (defined probably by efficiency, or maybe by ergonomics and efficiency), runs the program a few million times, and sees what the method it comes out with is like?


----------



## AlphaSheep (Jan 27, 2017)

Not that I know of. What I have thought of is using a genetic algorithm approach to generate random methods (a set of steps, each solving the orientation and/or permutation of some combination of corners, edges and/or centres with a substep of moves). For each step, it would generate pruning tables for that step and do a quick search, kind of like JARCS/HARCS, for an optimal solution to that step. The next generation of methods is created by mutating the 2 or 3 most successful methods from the previous generation. I was curious to see what it would come up with. I only got about 5% of the way through implementing it though.
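The generation loop described above could be sketched in Python roughly like this. Everything here (the step representation, the toy fitness, the population sizes) is a hypothetical stand-in; a real version would build pruning tables and call a solver such as HARCS to score each candidate method:

```python
import random

# Hypothetical piece sets and goals a step could target; a real search
# would model orientation/permutation per piece set much more carefully.
PIECE_SETS = ["corners", "edges", "centres"]
GOALS = ["orient", "permute", "solve"]

def random_step():
    """One step: a goal applied to a subset of piece types."""
    pieces = random.sample(PIECE_SETS, k=random.randint(1, len(PIECE_SETS)))
    return (random.choice(GOALS), tuple(sorted(pieces)))

def random_method(max_steps=6):
    return [random_step() for _ in range(random.randint(2, max_steps))]

def mutate(method):
    """Mutate by replacing, inserting, or deleting one step."""
    m = list(method)
    op = random.choice(["replace", "insert", "delete"])
    if op == "replace":
        m[random.randrange(len(m))] = random_step()
    elif op == "insert":
        m.insert(random.randrange(len(m) + 1), random_step())
    elif len(m) > 2:
        m.pop(random.randrange(len(m)))
    return m

def evolve(fitness, generations=50, pop_size=20, survivors=3):
    """fitness(method) -> lower is better (e.g. mean solver movecount).
    Keeps the 2-3 best methods each generation and mutates them,
    as described above."""
    population = [random_method() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[:survivors]
        population = parents + [
            mutate(random.choice(parents)) for _ in range(pop_size - survivors)
        ]
    return min(population, key=fitness)

# Stand-in fitness: prefer short methods whose last step is a full solve.
# A real version would average movecounts from actual solver searches.
def toy_fitness(method):
    penalty = 0 if method[-1][0] == "solve" else 10
    return len(method) + penalty

best = evolve(toy_fitness)
```

The interesting part is that `fitness` is pluggable, so the same loop works whether the score comes from a toy heuristic or from thousands of real solver runs.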


----------



## mDiPalma (Jan 27, 2017)

AlphaSheep said:


> For each step, it would generate pruning tables for that step and do a quick search, kind of like JARCS/HARCS, for an optimal solution to that step. The next generation of methods is created by mutating the 2 or 3 most successful methods from the previous generation. I was curious to see what it would come up with. I only got about 5% of the way through implementing it though.



I've also thought a lot about this, and so did Tony Snyder.

With the new functionality of HARCS to support user-defined methods via text-file input, this would only take iterated use of its engine to implement, though it would probably take a while to search the entire domain.


----------



## shadowslice e (Jan 27, 2017)

Oh cool, maybe in the next few years we could see actual runs of this and see what happens. I feel like there needs to be a second layer which condenses the really small steps into bigger and bigger steps, until you end up with about 4-6 steps.


AlphaSheep said:


> Not that I know of. What I have thought of is using a genetic algorithm approach to generate random methods (a set of steps, each solving the orientation and/or permutation of some combination of corners, edges and/or centres with a substep of moves). For each step, it would generate pruning tables for that step and do a quick search, kind of like JARCS/HARCS, for an optimal solution to that step. The next generation of methods is created by mutating the 2 or 3 most successful methods from the previous generation. I was curious to see what it would come up with. I only got about 5% of the way through implementing it though.


What did the code look like?

E: Also, does anyone have any suggestions as to which language would be best to do something like this in?

I was thinking of learning Haskell or something similar, but I'm no expert.


----------



## Teoidus (Jan 30, 2017)

I'd probably use a Python script that hooks up to HARCS to do all the computationally expensive stuff.

I think it's a cool idea, but one concern I had with it was defining a fitness function. In reality, our intuitive concept of a "good" method is some f(movecount, ergonomics/regrip count, look count, alg count, recognition), and that can be hard to capture.
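One crude way to pin that down is a weighted sum over those features. The weights below are pure guesses, and picking them well is exactly the hard part being raised here:

```python
def fitness(movecount, regrips, looks, algs, recog_difficulty,
            weights=(1.0, 0.5, 2.0, 0.05, 1.5)):
    """Hypothetical fitness: a weighted sum of the features named above.
    Lower is better. The weight values are placeholders, not tuned."""
    features = (movecount, regrips, looks, algs, recog_difficulty)
    return sum(w * x for w, x in zip(weights, features))

# Made-up feature profiles for illustration only:
roux_like = fitness(48, 8, 5, 40, 3)    # fewer moves/looks, more algs
lbl_like = fitness(100, 20, 12, 10, 1)  # many moves and looks, few algs
```

With these (arbitrary) weights the low-movecount profile wins, but shifting weight onto recognition or alg count can flip the ranking, which is the capture problem in a nutshell.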


----------



## shadowslice e (Jan 30, 2017)

Teoidus said:


> I'd probably use a Python script that hooks up to HARCS to do all the computationally expensive stuff.


Yeah, I was going to try to embed (not sure if that's the right word) HARCS into it once the new update that lets you define a solved state comes out. I still thought it could be fun to write a similar program myself, though it'll be nowhere near as polished (and I probably won't make the visual stuff).



> I think it's a cool idea, but one concern I had with it was defining a fitness function. In reality, our intuitive concept of a "good" method is some f(movecount, ergonomics/regrip count, look count, alg count, recognition), and that can be hard to capture.


Depending on how long it takes to run the method, I might actually tweak the conditions to see what it comes out with. One other thing I was wondering was whether it would be better to start piece by piece in a random order and then slowly group more and more steps together as they come up more frequently, so that you end up with 3-6 "big" steps, or to start with a group of human-created methods (like Roux, CFOP, ZZ, Petrus, PCMS, etc.) and then mutate them to see what happens.


----------



## Teoidus (Jan 30, 2017)

shadowslice e said:


> Depending on how long it takes to run the method, I might actually tweak the conditions to see what it comes out with.


You could also try defining the fitness function itself with ML by "training" it on well-established methods (so it knows that Roux is better than LBL, for example). Of course, you don't want to overfit, because then the program will just converge on an established method. Another idea is to try to "simulate" solves (so, given a string of solution moves, calculate how many regrips are needed, how awkward the moves are, how tough recognition would be, etc.) and output a theoretical lower bound on solve time (but then you have to deal with things like max TPS, defining "how complex" recognition can be, etc.).
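A toy version of that solve "simulation" might look like the following. The home-grip move set, the regrip rule, and the max-TPS figure are all made-up placeholders standing in for real hand analysis:

```python
# Assumed set of moves doable without regripping; a real model would
# track each hand's grip state separately.
HOME_GRIP = {"R", "U", "L", "M", "r", "F"}

def count_regrips(moves):
    """Crude regrip count: any run of moves outside the home-grip set
    costs one regrip when it starts."""
    regrips = 0
    in_grip = True
    for move in moves:
        if move[0] in HOME_GRIP:
            in_grip = True
        else:
            if in_grip:
                regrips += 1
            in_grip = False
    return regrips

def time_lower_bound(solution, max_tps=12.0, regrip_cost=0.3):
    """Theoretical lower bound in seconds: movecount at max TPS plus a
    fixed cost per regrip. Recognition/look costs are ignored here."""
    moves = solution.split()
    return len(moves) / max_tps + regrip_cost * count_regrips(moves)
```

For example, `time_lower_bound("R U R' U'")` charges no regrips, while a solution that keeps dipping into D/B moves gets penalised, which is the kind of signal the fitness function would need.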

You can of course just tweak the variables yourself like you said and see what the program comes up with, but I wonder if that would just end up being a program that generates methods you think are good and haven't thought of before, not necessarily the best methods.



shadowslice e said:


> One other thing I was wondering was whether it would be better to start piece by piece in a random order and then slowly group more and more steps together as they come up more frequently, so that you end up with 3-6 "big" steps, or to start with a group of human-created methods (like Roux, CFOP, ZZ, Petrus, PCMS, etc.) and then mutate them to see what happens.



I think a more organic way to go about it would be to make the merging of steps itself a type of "mutation".
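As a sketch: if each step is modelled as a set of solving targets, a merge mutation can just union two adjacent steps, so the solver then searches for both goals at once. The target names here are hypothetical:

```python
import random

def merge_mutation(method):
    """Merge two adjacent steps (each a set of hypothetical targets)
    into one combined step, chosen at a random position."""
    if len(method) < 2:
        return method
    i = random.randrange(len(method) - 1)
    merged = method[i] | method[i + 1]
    return method[:i] + [merged] + method[i + 2:]

# e.g. a three-step method collapses to two steps; no targets are lost,
# they just get solved together in one search.
result = merge_mutation([{"DL", "DR"}, {"EO"}, {"CP"}])
```

Used alongside the replace/insert/delete mutations, this lets big 3-6 step methods emerge from many small steps instead of being imposed up front.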


----------

