# Representation theory of the Rubik's Group [math and theo-phys guys gather]



## Sin-H (Jun 8, 2014)

Hey guys,

recently, I've been (quite randomly) wondering:

What does the representation theory of the Rubik's Cube group look like? Have nontrivial representations been constructed, are they irreducible, and what are their dimensions?

Before I go into more detail, let's do a quick introduction to representations:

If G is a finite group and ρ: G -> GL(n,ℂ) is a group homomorphism from G to the group of invertible linear transformations on some n-dimensional ℂ-vector space, then ρ is called an n-dimensional representation of G.

Basically, having a representation makes working with groups conceptually very easy because we are used to working with matrices and linear operators.

There is of course always the trivial representation, which maps the whole group to the identity transformation on ℂ^n, but that is not really an interesting one.

Now, a representation is called reducible if there exists a non-trivial invariant subspace U ⊂ ℂ^n, i.e. ρ(g)U ⊆ U for all g ∈ G, and irreducible if {0} and ℂ^n are the only invariant subspaces.
The irreducible representations are the really interesting ones: for a finite group, any representation can be written as a direct sum of irreducible subrepresentations.
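
To make the decomposition concrete, here is a minimal Python sketch (my own toy example, not from the thread): the 3-dimensional permutation representation of ℤ_3, generated by the cyclic shift of coordinates, splits into three 1-dimensional invariant subspaces spanned by the vectors v_k = (1, w^k, w^(2k)) with w a primitive cube root of unity.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

def shift(v):
    """Permutation matrix of the generator of Z_3: cyclic shift of coordinates."""
    return [v[-1]] + v[:-1]

# The 3-dim permutation representation of Z_3 on C^3 decomposes into three
# 1-dim invariant subspaces, spanned by v_k = (1, w^k, w^(2k)).
for k in range(3):
    v = [w ** (k * i) for i in range(3)]
    sv = shift(v)
    # On span(v_k) the shift acts as multiplication by the scalar w^(-k):
    # a 1-dimensional subrepresentation, automatically irreducible.
    assert all(abs(sv[i] - w ** (-k) * v[i]) < 1e-12 for i in range(3))

print("C^3 splits into three 1-dim irreducibles of Z_3")
```

Each eigenline is a subrepresentation, and their direct sum recovers all of ℂ^3.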

Now why not try to actually find a neat representation of the Rubik's Cube group? Or maybe even classify all its irreducible subrepresentations (probably way too much work)? I reckon it might turn out to be really hard, mainly because the Rubik's Cube group is kinda huge. There are formulas which constrain the possible dimensions of irreducible representations (the squares of the dimensions sum to the group order, and each dimension divides it), but that still leaves a lot of room.
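
The dimension constraints can be sanity-checked in a few lines of Python (a sketch of standard facts, not a computation anyone in the thread ran): for a finite group G, the irreducible dimensions d_i satisfy Σ d_i² = |G| and each d_i divides |G|. Checking this on S_3, and computing the order the constraints would have to cope with for the cube group:

```python
import math

# For a finite group G, the dimensions d_i of its complex irreducibles
# satisfy sum(d_i**2) == |G| and each d_i divides |G|.
# Sanity check on S_3, whose irreducibles have dimensions 1, 1, 2:
dims_S3 = [1, 1, 2]
order_S3 = math.factorial(3)
assert sum(d * d for d in dims_S3) == order_S3
assert all(order_S3 % d == 0 for d in dims_S3)

# For the Rubik's Cube group the same constraints act on the order
# 8! * 3^7 * 12! * 2^11 / 2:
cube_order = (math.factorial(8) * 3**7 * math.factorial(12) * 2**11) // 2
print(cube_order)  # 43252003274489856000
```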

I didn't really find any papers about this, although Axel tells me that Prof. Plesken of RWTH Aachen might have worked a bit on something related.
Now I don't know what the mathematicians and physicists in here are interested in, but maybe some are interested in this more abstract stuff.
I have lots of other stuff to work on this summer as well but it might be fun to try to mess around a little. It is probably not worth the effort, but hey. I just thought I'd share this thought with you and maybe it's worth a little discussion.


----------



## Christopher Mowla (Jun 8, 2014)

Very nice post (and very good question).

Is a sub-representation a position which can _represent_ millions of positions? (I'm not abstract algebra savvy, so I got lost in the language you used).


----------



## Sin-H (Jun 8, 2014)

If you have an invariant subspace, you can have a representation of your group that maps into the automorphisms of that subspace instead of ℂ^n. That is a subrepresentation (it is again a representation of your group). Now if your representation is reducible, then you can always write it as a direct sum of irreducible subrepresentations: in a suitable basis, its matrices are block diagonal. This basically means that we have simplified the structure as much as possible.
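
A minimal sketch of that block-diagonal picture (my own toy example): the 2-dimensional representation of ℤ_2 sending the nontrivial element to the coordinate swap is reducible, and in the basis {(1,1), (1,-1)} it becomes the direct sum of the trivial and the sign representation.

```python
# The 2-dim representation of Z_2 sending the nontrivial element to the
# coordinate swap (x, y) -> (y, x) is reducible: span{(1, 1)} and
# span{(1, -1)} are invariant subspaces.  In that basis the swap matrix
# is diagonal, i.e. the direct sum of the trivial and sign representations.

def swap(v):
    return (v[1], v[0])

sym, antisym = (1, 1), (1, -1)
assert swap(sym) == sym                          # trivial subrepresentation
assert swap(antisym) == tuple(-x for x in antisym)  # sign subrepresentation
print("swap = (+1) direct sum (-1) in the basis {(1,1), (1,-1)}")
```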

The idea behind this is that we want to model groups via actions on a vector space, because groups usually model symmetry actions and such. In a representation, each position is assigned a matrix acting on vectors, in such a way that the product of two such matrices is the matrix assigned to the two group elements (positions) executed one after the other.
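
That defining property, ρ(p)ρ(q) = ρ(pq), can be checked directly on a small example (a generic sketch, assuming the standard permutation-matrix representation, here for S_3):

```python
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix of p: column j has its 1 in row p[j]."""
    n = len(p)
    return [[1 if p[j] == i else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(p, q):
    """(p o q)(j) = p[q[j]]: apply q first, then p."""
    return tuple(p[q[j]] for j in range(len(p)))

# rho(p) = perm_matrix(p) is a homomorphism: rho(p) rho(q) == rho(p o q)
for p in permutations(range(3)):
    for q in permutations(range(3)):
        assert matmul(perm_matrix(p), perm_matrix(q)) == perm_matrix(compose(p, q))
print("rho(p) rho(q) == rho(p o q) for all p, q in S_3")
```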

Btw: the Rubik's Cube group is isomorphic to
(ℤ_3^7 × ℤ_2^11) ⋊ ((A_8 × A_12) ⋊ ℤ_2).
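
A quick consistency check on that isomorphism (my own sketch): a semidirect product has the same order as the corresponding direct product, so the factor orders must multiply to the familiar 43 quintillion.

```python
import math

# |A_n| = n!/2, and a semidirect product has the same order as the
# direct product of its factors, so the isomorphism predicts
# |G| = 3^7 * 2^11 * |A_8| * |A_12| * 2.
order = 3**7 * 2**11 * (math.factorial(8) // 2) * (math.factorial(12) // 2) * 2
print(order)  # 43252003274489856000
```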

Two years ago, I did some representation theory of cyclic groups. It was quite demanding back then. And I have no idea what is known about the representations of semidirect products, but I am sure that general, abstract work has been done on this.


----------



## Forte (Jun 8, 2014)

I've always wondered about this. Apparently (according to Math.SE), the representations of G = A ⋊ K are pretty nice when the normal subgroup A is commutative, which it is here. There's also Clifford theory, which relates representations of a group to representations of one of its normal subgroups, so considering the cube group as a normal subgroup of the full permutation group and finding representations of that might be easier?
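
One reason the abelian case is so nice (a sketch of standard facts, not from the thread): every complex irreducible of an abelian group is 1-dimensional, a character χ: G → ℂ*, and there are exactly |G| of them. For ℤ_n the characters are χ_k(a) = e^(2πika/n):

```python
import cmath

def chi(n, k, a):
    """k-th character of Z_n evaluated at a."""
    return cmath.exp(2j * cmath.pi * k * a / n)

# Each chi(n, k, .) is a homomorphism Z_n -> C*:
n = 6
for k in range(n):
    for a in range(n):
        for b in range(n):
            assert abs(chi(n, k, (a + b) % n) - chi(n, k, a) * chi(n, k, b)) < 1e-9

# So the abelian normal subgroup Z_3^7 x Z_2^11 of the cube group has
# exactly 3^7 * 2^11 one-dimensional irreducibles:
print(3**7 * 2**11)  # 4478976
```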

It would be awesome for someone to do this though; I hope it happens


----------



## Sin-H (Jun 8, 2014)

Forte said:


> It would be awesome for someone to do this though; I hope it happens


In that regard, I should have chosen my upcoming master thesis in maths instead of theoretical physics  (not really, my knowledge of algebra is too rudimentary)


----------



## Christopher Mowla (Jun 8, 2014)

Sin-H said:


> If you have an invariant subspace, you can have a representation of your group that maps into the automorphisms of that subspace instead of ℂ^n. That is a subrepresentation (it is again a representation of your group). Now if your representation is reducible, then you can always write it as a direct sum of its irreducible subrepresentations - its matrices are block diagonal. This basically means that we have simplified the structure as much as possible.
> 
> The idea behind this is that we want to model groups via actions on a vector space. This is because groups usually model symmetry actions or such. In a representation, each position would be assigned a matrix that acts on a vector in a way that the product of two such matrices is the matrix of the two group elements (positions) executed after each other.


With "My Ultimate Commutator Challenge" proof, I have effectively solved all 43 quintillion positions with 2(4386) = 8772 case representation solutions. That is, there are at most 4386 even permutation representations of the 3x3x3, and we can make any odd permutation on the 3x3x3 an even permutation by doing a quarter turn to any of the 6 faces, so we simply double this number.

Therefore, I effectively show that, when thinking about representations of all positions, my representation reduces the number of positions of the 3x3x3 to less than double the number of positions of the 2x2x2, and it reduces the 2x2x2 to about 1400 positions.
What might interest you in this particular topic is that I have solved all even permutation corner cases in a manner which shows a relationship between all solutions. (It might also be possible to do this for middle edges, but I didn't do it in my solutions for them because there were more than 3 times as many cases, and it wasn't really required by the proof.)

With my solving method, we cannot solve/generate orientations (mixed with even permutations) using a product of two matrices (for reasons I cannot explain at the moment), but as I did for A_8 mixed with corner orientations (and which can probably also be done for A_12 mixed with middle edge orientations), we might be able to construct a reducible representation along the lines of what you mentioned. That is, we cannot solve the cases using a product of two matrices (except for the permutation cases in which all pieces are oriented, using a product of two permutation matrices), but we can do what I think you're trying to accomplish by writing the _structure_ of my corner solutions as a product of matrices.

We can let the vector matrix be two matrices (instead of one), and the other matrix you mentioned be two "orientation matrices" (instead of one).

That is, we have a product of the following 4 matrices:

EvenPermutationCornerPosition := (orientation[i,2])(vector[j,2])(orientation[i,1])(vector[j,1]),
where 1 ≤ i ≤ 1002 and 1 ≤ j ≤ 21 (i and j integers), and vector[j,1] and vector[j,2] are permutation matrices.

I believe some (orientation[i,2]) equals some (orientation[i,1]), for example, but I haven't looked at my corner solutions in a while to see in how many instances this is the case.

Lastly, we could overcomplicate things by combining (orientation[i,1])(vector[j,1]) into one matrix and (orientation[i,2])(vector[j,2]) into one matrix, but keeping them as four matrices lets us keep just 2(1002) + 2(21) = 2046 "generator matrices" to construct all even permutation corner positions of the nxnxn cube. (As I've mentioned, there are at most 2046 "generator matrices" since some are equivalent; the actual number is most likely less. If we combine our product into two matrices, then we cannot reduce the number of orientation[i,1]'s, and thus we cannot reduce the number of "generator matrices".)
Is this what you are talking about? If so, then I have already done this for corners without using matrices, and thus it is possible to do this.

I mention in my proof that doing something like this with my corner solutions can be used in a derivative work (since I basically did this for corner orientations, even though I didn't have to...but since I did it for corner orientations, it seems to be possible to also do it for middle edge orientations since they are 1 dimension less complicated than the corners).
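
For readers trying to follow the permutation/orientation split being discussed, here is a generic sketch (my own illustration, explicitly *not* cmowla's construction, with made-up example states): a corner state can be written as a pair (p, o) with p a permutation of the 8 corner slots and o a twist vector in ℤ_3^8, composed by the semidirect-product rule of ℤ_3^8 ⋊ S_8.

```python
# A corner state as a pair (p, o): p permutes the 8 corner slots,
# o records twists in Z_3^8.  Composition follows the semidirect-product
# group law of Z_3^8 semidirect S_8.

def compose(s1, s2):
    """Apply s2 first, then s1."""
    p1, o1 = s1
    p2, o2 = s2
    p = tuple(p1[p2[i]] for i in range(8))
    o = tuple((o1[p2[i]] + o2[i]) % 3 for i in range(8))
    return p, o

identity = (tuple(range(8)), (0,) * 8)

# Hypothetical example: a 3-cycle of slots 0, 1, 2 with one twist,
# composed with its inverse, returns to the identity:
g = ((1, 2, 0, 3, 4, 5, 6, 7), (1, 0, 0, 0, 0, 0, 0, 0))
g_inv = ((2, 0, 1, 3, 4, 5, 6, 7), (0, 2, 0, 0, 0, 0, 0, 0))

assert compose(g, g_inv) == identity
print("orientations compose as o1[p2[i]] + o2[i] (mod 3)")
```

Note that the permutation parts multiply on their own, but the orientation parts mix with the permutation, which is the obstruction to writing everything as a product of two independent matrices.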

EDIT:


cmowla said:


> there are at most 2046 "generator matrices" since some are equivalent. The number of "generator matrices" is most likely less.

I just wrote a program to scan all of my even permutation corner solutions, and it found that there are at most 1474 different "generator matrices" (so I just reduced it from 2046 to *1474*, detecting 572 duplicates in my solutions for all even permutation corner positions of the nxnxn cube).

Perhaps there are more duplicates, but my program only eliminated duplicates based on how they were typed in my solutions (it did a string comparison), not based on whether I wrote (8)(9) = 72 versus (9)(8) = 72, for example. However, I believe I was very careful to write the corner solutions in a consistent manner, so 1474 is most likely correct as far as my solutions are concerned.

There were 60 duplicates among the orientation[i,1]'s and 512 duplicates among the orientation[i,2]'s. (No orientation[i,1] was equal to any orientation[i,2], or vice versa, as opposed to what I originally thought.)
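
The kind of scan described could look like the following sketch (hypothetical code with made-up solution strings, not cmowla's program): exact string comparison catches literal duplicates, while a normalization pass that sorts commuting factors would additionally catch reorderings like "(8)(9)" vs "(9)(8)".

```python
import re

# Made-up solution strings for illustration:
solutions = ["(8)(9)", "(9)(8)", "(8)(9)", "(3)(5)"]

exact_distinct = set(solutions)          # string comparison only

def normalize(s):
    """Canonical form: sort the parenthesized factors, assuming they commute."""
    factors = re.findall(r"\(\d+\)", s)
    return "".join(sorted(factors))

normalized_distinct = {normalize(s) for s in solutions}

print(len(exact_distinct), len(normalized_distinct))  # 3 2
```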


----------



## Lucas Garron (Jun 9, 2014)

Representation theory definitely helps to visualize distributions for things like the 2x2x2 or CO (corner orientation), because you can get the limiting distribution by taking the limit of the powers of a matrix. I think I've posted some things about that, but I don't have time to dig them up right now.
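
A toy version of that idea (my own sketch, not Lucas's actual computation): take random moves on ℤ_3, each step adding +1 or -1 with probability 1/2. The distribution after n moves is a row of the n-th power of the transition matrix, and the powers converge to the uniform distribution because every nontrivial Fourier mode of the step distribution has modulus below 1.

```python
# Random walk on Z_3: each step adds +1 or -1 with probability 1/2.
# Powers of the transition matrix converge to the uniform distribution.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5],
     [0.5, 0.5, 0.0]]

P = T
for _ in range(50):          # P = T^51; the nontrivial eigenvalues are -1/2,
    P = matmul(P, T)         # so the error shrinks like (1/2)^n

assert all(abs(x - 1/3) < 1e-9 for row in P for x in row)
print([round(x, 6) for x in P[0]])  # [0.333333, 0.333333, 0.333333]
```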


----------



## Sin-H (Jun 9, 2014)

cmowla, I'll have a closer look at your work there. It already looks quite useful.


----------

