I

The obvious variation is to add more signs into the game: say “rock-paper-reviewer-editor-scissors”. It is inobvious, though, whether rock beats reviewer or the other way round. (Some of those reviewers are *tough.*)

One way is to draw a pentagram in a single line (making each segment an arrow pointing the way you draw it) and then to draw a circle round it (marking the direction you draw). Then you can treat the points of the pentagram as the five signs, with each point originating two arrows indicating two other points, and being indicated by two of the others; which gives two signs that submit, and two that conquer.
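Sketched in code, under one assumed reading of the drawing (a single-stroke pentagram visits the points in the order 0, 2, 4, 1, 3, skipping one point each time, while the circle visits them 0, 1, 2, 3, 4): each sign ends up beating the two signs one and two steps ahead of it.

```python
# Five-sign "pentagram" relation, as a sketch. Assumption: pentagram
# segments give each point an arrow to the point two steps ahead, and
# circle segments an arrow to the point one step ahead; an arrow from
# a to b means "a beats b".
N = 5

def beats(a, b):
    """True if sign a conquers sign b under the two sets of arrows."""
    return (b - a) % N in (1, 2)  # circle arrow: +1; pentagram arrow: +2

for sign in range(N):
    conquered = [other for other in range(N) if beats(sign, other)]
    submits_to = [other for other in range(N) if beats(other, sign)]
    assert len(conquered) == 2 and len(submits_to) == 2
```

The asserts confirm the claim in the text: every sign conquers exactly two others and submits to the remaining two.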

Also, probably the most Satanic game design in history.

This addition alone, though, doesn’t make the game more interesting, just more complicated.

One could say winning or losing by the circle is different from winning or losing by the pentagram: but how? (Through a pentagram loss, you forfeit *your very soul?*)

Ib

As for the simpler obvious variation: Rock-paper-plasticknife-scissors, the game with four sign(al)s/gestures, is a bit iffy. You tie with the same; you lose to one, win against one… but what about the fourth? If it is a tie, one half of games end in a tie. It can’t be a win or a loss, because that would make some signs better than others. If rock wins against plasticknife, then plasticknife loses to both rock and scissors, wins against paper and ties against itself — it would always be better to play rock (WWLT) than to play plasticknife (WLLT).
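The dominance argument can be checked mechanically. A sketch, under the text’s assumption that rock beats plasticknife (with the usual cycle, scissors beating plasticknife, and plasticknife beating paper):

```python
# The four-sign game, assuming rock beats plasticknife. The "beats"
# relation: rock > scissors > paper > rock, scissors > plasticknife,
# plasticknife > paper, rock > plasticknife.
BEATS = {
    "rock": {"scissors", "plasticknife"},
    "paper": {"rock"},
    "scissors": {"paper", "plasticknife"},
    "plasticknife": {"paper"},
}

def record(sign):
    """(wins, losses, ties) for one sign against every sign, itself included."""
    wins = sum(other in BEATS[sign] for other in BEATS)
    losses = sum(sign in BEATS[other] for other in BEATS)
    ties = len(BEATS) - wins - losses
    return wins, losses, ties
```

`record("rock")` gives `(2, 1, 1)` (the WWLT of the text) while `record("plasticknife")` gives `(1, 2, 1)` (WLLT): rock strictly dominates plasticknife, so nobody would ever play it.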

Any odd number of gestures can be arranged to be equally good; no even number above two can be, without increasing the number of ties.
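One way to see the odd-number claim is the rotational construction: arrange the n signs in a circle and let each sign beat the next (n - 1) / 2 signs after it. A sketch:

```python
# Rotational construction for an odd number n of signs: sign i beats the
# next (n - 1) // 2 signs around the circle. Every sign then wins and
# loses equally often, and its only tie is against itself. (Plugging in
# an even n, the same formula instead leaves each sign tying one
# opponent besides itself; there is no way around that extra tie.)
def beats(a, b, n):
    return 0 < (b - a) % n <= (n - 1) // 2

for n in (3, 5, 7, 9):
    for sign in range(n):
        wins = sum(beats(sign, other, n) for other in range(n))
        losses = sum(beats(other, sign, n) for other in range(n))
        assert wins == losses == (n - 1) // 2
```

With n = 3 this is plain rock-paper-scissors; with n = 5 it is the pentagram game from section I.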

Then again, with more gestures this just isn’t interesting. Who cares if Horned Goat loses to Hanged Man or Lone Dalek, if it’s the same loss either way?

Ic

Rock-paper-scissors doesn’t have the same kind of hierarchical arrangement as playing cards do — there you don’t get to choose your cards, so you can have cards that are better than others, most of the time. In rock-paper-scissors, you need to have options that are somehow equal (by not knowing the other player’s choice, if in no other way), because why would you choose a sign that was less likely to win?

Consider the card game known as “Red”. Both players draw a card from a deck, face down. Both then reveal their card. A red always beats a black; below that, a bigger card always wins. Not a particularly interesting game, but perfect for high school students really tapped-out after an unwelcome lesson. If you could call the card you wanted in Red, you’d be screaming “Ace of Hearts!” all the time — and having a tie with your opponent, who would be shouting the same. (Or “Diamond Ace!” — it would be a pointless, melodramatic game either way.)
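The rule of Red fits in a couple of lines. A sketch; the exact rank order (2 low, ace high) is an assumption here:

```python
# Comparison rule for "Red": colour decides first (any red beats any
# black), then the bigger rank wins. Rank order assumed: 2 low, ace high.
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]
RED = {"hearts", "diamonds"}

def strength(card):
    rank, suit = card
    return (suit in RED, RANKS.index(rank))  # colour first, then rank

def red_winner(card_a, card_b):
    """1 if the first card wins, -1 if the second does, 0 on a tie."""
    a, b = strength(card_a), strength(card_b)
    return (a > b) - (a < b)
```

So `red_winner(("2", "diamonds"), ("A", "clubs"))` is `1`, the lowest red beating the highest black; and the two screamed aces, `("A", "hearts")` against `("A", "diamonds")`, come out `0`, the melodramatic tie.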

This illustrates that either your choices can’t matter, or you must have no choice at all… which is a depressing prospect, but rock-paper-scissors is not much of an intellectual game anyway, as far as its mechanics go. The psychology can of course be very interesting, especially when you keep playing it. (“Is she going for scissors again? Third time in a row? But what if she’s counting on me pulling rock, and intends to play paper? Then I should play scissors— unless—”, et cetera. Put two psychologists to work playing each other, and they’ll probably stare at each other for five minutes, and then one admits defeat.)

It would be ideal to make a game with mechanics just complex enough to generate interesting psychology. Rock-paper-scissors isn’t quite complex enough. (Then again, it’s better than tic-tac-toe, a game where any player smarter than your average calculator can always tie, and two such players will always tie.)

II

The obvious biological variation would be to play the game with both hands at the same time. This too makes the game different — in this case quicker (two games at once!) — but not more interesting.

Then again, this gives more scoring conditions: a double win, a small win (win one, tie one), a fighting tie (win one, lose one) and a full tie (tie both). (The first two are, from the other end, a double lose and a small lose.)

By crunching the numbers, the likelihood of these outcomes, assuming the players are dumb automatons, is:

11% Double win (W/W)

22% Small win (W/T)

22% Fighting tie (W/L)

11% Full tie (T/T)

22% Small lose (L/T)

11% Double lose (L/L)

— one percent is lost in the rounding. (Use 1/9 and 2/9 if you want to be exact.) If you take the first two as “wins”, the middle as “ties” and the last two as “loses”, then the odds are the same as in a normal one-handed game of rock-paper-scissors; there’s just a bit more detail within each category. To make a sensible variant of the game, this added sensitivity should be utilized somehow. (Note the two ties aren’t different in any intuitive way; both players get the same result in each. Some new rule could distinguish them for some other new aspect of the game.)
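The exact fractions fall out of brute enumeration. A sketch, treating each hand as an independent, uniformly random sign:

```python
from fractions import Fraction
from itertools import product

# Brute enumeration of the two-handed game: each player throws two
# independent uniform signs; game i is my hand i against your hand i.
# Convention: each sign beats the one just below it, mod 3.
def result(a, b):
    return ["T", "W", "L"][(a - b) % 3]  # from the first player's side

counts = {}
for a1, a2, b1, b2 in product(range(3), repeat=4):
    key = tuple(sorted((result(a1, b1), result(a2, b2))))
    counts[key] = counts.get(key, 0) + 1

probs = {key: Fraction(n, 81) for key, n in counts.items()}
# probs[("W", "W")] is the double win, probs[("T", "W")] the small win,
# probs[("L", "W")] the fighting tie, probs[("T", "T")] the full tie.
```

The enumeration confirms the table: 1/9 for each symmetric outcome, 2/9 for each mixed one.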

Mind you, this could be a decision tool if you needed two exit conditions —

Double win : We’ll do what I want, all the way

Small win: We’ll do what I want, for the most part

Fighting tie: Fine, let’s do nothing; I’ll go home, this isn’t working!

Full tie: Let’s try to split everything evenly, okay?

— but I’m not sure anyone needs help for making decisions like that.

The mechanic is there; the game just needs an addition that uses it.

III

The third variation, a sort of obnoxious meta thing, would be to have three players, each with two hands, each playing a one-handed game with each of the other two at the same time.

Call the players A, B and C. Three games resolve at the same time, each with three possible results (win/lose, lose/win, tie); this gives twenty-seven different total outcomes. Those form four categories, the way I choose to group them.

I’ll write “A>B” for “A wins over B”, “A<B” for “A loses to B” and “A=B” for “A and B tie”.

1) A<B<C<A: a roundabout tie. A>B>C>A is the same thing: each player has one win, one loss, and there’s no assigning rank to them.

2) A=B=C, every game ties; everyone flashes the same sign. A great tie! Also, the appearance of a gang meet-up.

3) A>B(sthng)C<A — strong ranking: one player wins both of his/her games, a victory! (I’ll call it that to distinguish it from “wins”, which are the results of individual games.) The third game, between the two losers, either gives second and third places, or a divided second if they tie:

3a) Full rank: A>B>C<A. Player A takes first place (wins over B and C), Player B the second (wins over C, loses to A), Player C the third (loses to A and B). Alternately, A>B<C<A. (It’s probably sensible to say A>B>C=A and A<B<C=A belong here as well; one can’t argue for any different order than the obvious one.)

3b) Weaker rank: A>B=C<A. Player A is the winner; the other two both lose.

Note that there can’t be a case where two players win both their games: the game between them can have at most one winner. This three-player game produces either one victor (above) or none (below).

4) A>B(sthng)C=A — weak ranking: no player can be ranked as the best of the three. (A>B>C=A is already included in 3a.)

4a) Weaker rank: A>B<C=A. There’s no victor, just two winners; but B sure loses.

4b) Weakest rank: A>B=C=A. There are two ties and one win-lose; thus, a winner, a loser, and one the game didn’t decide about. (Also, A<B=C=A.)

I think one has to read a tie as “no decision”, because one can’t really interpret a tie as “are equal” because of situations like A>B=C>A. If B and C are equal, why is one strictly better than A and one strictly worse? Unless you interpret that as collapsing > into =; how you interpret the mechanics makes the game.
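Since each player commits a separate hand to each opponent, the three games really are independent, and the twenty-seven outcomes can be enumerated and sorted into the categories above. A sketch, counting a few of them:

```python
from itertools import product

# Enumerate the 27 outcome triples of the three independent games
# (A vs B, B vs C, C vs A); ">" means the left-named player wins.
OUTCOMES = list(product((">", "<", "="), repeat=3))

def wins_of(player, outcome):
    """Number of games the player wins in one outcome triple."""
    ab, bc, ca = outcome
    return {
        "A": (ab == ">") + (ca == "<"),
        "B": (ab == "<") + (bc == ">"),
        "C": (bc == "<") + (ca == ">"),
    }[player]

great_ties = [o for o in OUTCOMES if o == ("=", "=", "=")]
roundabouts = [o for o in OUTCOMES if o in ((">", ">", ">"), ("<", "<", "<"))]
victories = [o for o in OUTCOMES if any(wins_of(p, o) == 2 for p in "ABC")]
```

The counts come out as the text suggests: one great tie, two roundabout ties, and nine outcomes with a victor (three choices of victor, times three results for the leftover game); the remaining fifteen are the weaker rankings.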

As for the improved version of rock-paper-scissors, I have no idea. I’m just throwing up mechanics.