Michael Wu, Ph.D. is Lithium's Principal Scientist of Analytics, digging into the complex dynamics of social interaction and group behavior in online communities and social networks.
Michael was voted a 2010 Influential Leader by CRM Magazine for his work on predictive social analytics and its application to Social CRM. He's a regular blogger on the Lithosphere's Building Community blog and previously wrote in the Analytic Science blog. You can follow him on Twitter or Google+.
This is the sequel to my last post: Beat the Cheat: Stop Gaming the Gamification. In that post, I presented a psychologist/economist's solution to the problem of cheating in gamification. It turns out that we don't have to build a bulletproof gamification system. We just have to make it hard enough to game that cheaters don't feel the reward is worth the effort required to "game the system." I talked about two levers that you can pull to play this psychological game:

1. Decreasing the perceived value of the rewards
2. Increasing the effort required to game the system
Today, I’d like to continue this discussion and show you practical ways to affect these two levers.
Decreasing the Perceived Value of the Rewards
There are many effective ways to lower the perceived value of a reward, so what I describe here is by no means complete. These are, however, practical ways to decrease the perceived value of the rewards without demotivating the players too much.
Increasing the Effort Required to Game the System
There are many ways to make a gamification scheme harder to cheat. Most rewards in gamification are based on metrics or some cryptic combination of metrics. We can certainly make the combination so complex that players can't figure out what they need to do to get the reward. This is precisely what Google did to increase the effort required to game their PageRank algorithm. Although this simple "brute force" approach does work, it's probably not the most economical way. In my experience, there are two key elements that are particularly effective:
(A) Metrics That are Less Susceptible to Gaming
The first element is to use metrics that the players do not have direct control over. I'd like to introduce two classes of metrics:

1. Time-bound unique-user-based reciprocity (TUUR) metrics
2. Time-bound unique-content-based reciprocity (TUCR) metrics
The idea is that we should reward a player based on metrics that are accumulated within a given time frame, and these metrics should measure the number of unique users or the number of unique reactions to his actions. Let me illustrate what I mean with a couple of examples.
The number-of-retweets I receive is a reciprocity metric, because it depends on other people's reaction to my tweets. But it is NOT unique-user-based, because each tweeter can retweet me multiple times (they can even retweet the same tweet from me multiple times). So I may get 100 retweets, but this wouldn't be so impressive if they were all from the same user. That is why we need the reciprocity metric to be based on unique users. So, what if we use the number-of-unique-retweeters? This would be a unique-user-based reciprocity metric. However, it is not time-bound, because it is ignorant of when a tweeter retweeted me. I may get 100 retweets from 100 unique users, but this metric wouldn't be so impressive if it were accumulated over the span of a year. That is why we also require the unique-user-based reciprocity metric to be time-bound. Therefore, the number-of-unique-retweeters-per-week would be an example of a TUUR metric.
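To make this concrete, here is a minimal sketch of how a TUUR metric could be computed. It assumes retweet events arrive as simple (retweeter, tweet, timestamp) tuples; the event format and all names are hypothetical, not any particular platform's API.

```python
from datetime import datetime

# Hypothetical retweet events as (retweeter, tweet_id, timestamp) tuples.
retweets = [
    ("alice", "tweet_1", datetime(2012, 3, 5)),
    ("alice", "tweet_1", datetime(2012, 3, 6)),   # same user again: still one unique retweeter
    ("bob",   "tweet_2", datetime(2012, 3, 7)),
    ("carol", "tweet_2", datetime(2011, 8, 1)),   # outside the week: ignored
]

def unique_retweeters_per_week(events, week_start, week_end):
    """TUUR metric: distinct retweeters within a time window.

    Time-bound: only events inside [week_start, week_end) count.
    Unique-user-based: the set collapses repeat retweets by the same user.
    """
    return len({user for (user, _tweet, ts) in events
                if week_start <= ts < week_end})

print(unique_retweeters_per_week(
    retweets, datetime(2012, 3, 5), datetime(2012, 3, 12)))  # 2 (alice and bob)
```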
The second example is from my previous post. Instead of rewarding a player for the number-of-messages he posts in a community (which he can control directly), we should reward him for the number-of-likes (or kudos) he receives per month. Every player has direct control over the quantity of messages he posts, but he can't directly control how many people will "like" his messages, since that depends on other players. Because each player can only like a piece of content once, this reciprocity is unique to each piece of content. Therefore, the number-of-likes-per-month is a TUCR metric.
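The same pattern handles the TUCR case; the only change is that uniqueness is taken over (liker, content) pairs rather than over users, mirroring a platform where each player can like a given piece of content only once. Again, the data shapes and names below are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical like events as (liker, content_id, timestamp) tuples.
likes = [
    ("dave", "post_9", datetime(2012, 4, 2)),
    ("dave", "post_9", datetime(2012, 4, 3)),   # duplicate (liker, content) pair: collapsed
    ("dave", "post_7", datetime(2012, 4, 4)),   # same liker, different content: counts
    ("erin", "post_9", datetime(2012, 4, 9)),
]

def likes_per_month(events, month_start, month_end):
    """TUCR metric: distinct (liker, content) reactions within a time window.

    Time-bound: only events inside [month_start, month_end) count.
    Unique-content-based: each liker/content pair counts at most once,
    mirroring a platform that allows one like per piece of content.
    """
    return len({(user, content) for (user, content, ts) in events
                if month_start <= ts < month_end})

print(likes_per_month(likes, datetime(2012, 4, 1), datetime(2012, 5, 1)))  # 3
```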
Although TUUR and TUCR metrics are more resistant to gaming, they are still technically gameable. Players can certainly team up and give each other "likes" or retweet each other. However, to successfully game a gamification scheme that uses these metrics, they would need to coordinate a large number of users over an extended period of time. This is not easy to achieve. Even if some players manage to pull this off, it may not be worth the effort just to get a few more "likes" or unique retweeters for the week.
(B) Total Transparency and Social Shame
This brings us to the second element, which is to leave a transparent audit trail for everyone to see. Although TUUR and TUCR metrics are technically cheatable, if we make all the reciprocal actions completely transparent down to the atomic event, then people are much less likely to cheat.
Total transparency means we need to make visible all the data on who performed an event (such as a "like"), what it was performed on, and when (possibly where, if geo-location is available) it took place. For example, if I received 100 likes, then total transparency would allow everyone to see which 100 individuals (who) liked me, which pieces of content (what) they liked from me, and precisely when they clicked the like button for every like I received. Total transparency is helpful because not only does it make good behavior visible, it also makes any cheating behavior discoverable, and sometimes blatantly obvious.
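As a sketch of what recording events "down to the atomic event" could look like, the record below carries the who/what/when (and optional where) of a single like. The field names and structure are my own illustration, not Lithium's actual data model.

```python
from dataclasses import dataclass, asdict
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class LikeEvent:
    """One atomic, publicly auditable 'like': who, what, when (and optionally where)."""
    who: str                     # the user who clicked like
    what: str                    # the piece of content that was liked
    when: datetime               # the exact moment of the click
    where: Optional[str] = None  # geo-location, if available

# Publishing every event, rather than just the aggregate "100 likes",
# lets anyone re-derive the metric and inspect the raw events behind it.
event = LikeEvent(who="user456", what="post_42", when=datetime(2012, 4, 9, 13, 37))
print(asdict(event))
```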
Suppose we have two players (say, User123 and User456) on the leaderboard of some TUUR or TUCR metric. If we make it easy for everyone to see all the events contributing to each player's TUUR/TUCR metric, then it would be pretty easy to discover any coordinated gaming activity. For example, maybe we will discover that 90% of User123's retweets came from User456 and vice versa. Then it would be pretty obvious that something fishy is going on.
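This kind of check is easy to automate once the audit trail is public. The sketch below flags any player whose reciprocal events are heavily concentrated on a single counterpart; the 80% threshold and all user names are arbitrary choices for illustration.

```python
from collections import Counter

# Hypothetical audit trail: (retweeter, author) pairs, one per retweet event.
events = (
    [("User456", "User123")] * 9 +  # 9 of User123's 10 retweets come from User456...
    [("User789", "User123")] +
    [("User123", "User456")] * 9 +  # ...and vice versa
    [("User789", "User456")]
)

def top_fan_share(events, author):
    """Return the author's single biggest retweeter and that user's share of the total."""
    fans = Counter(src for (src, tgt) in events if tgt == author)
    top_fan, top_count = fans.most_common(1)[0]
    return top_fan, top_count / sum(fans.values())

for author in ("User123", "User456"):
    fan, share = top_fan_share(events, author)
    if share >= 0.8:  # arbitrary illustrative cutoff
        print(f"{author}: {share:.0%} of retweets from {fan} -- something fishy")
```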
When this fraud discovery process is made simple, people’s cheating behaviors can be exposed easily in public (i.e. social shame). Knowing this, a player would probably hesitate before coordinating any dishonest activity. Even if some of the players don’t care about their reputation, other people might not conspire with them knowing the possibility of social shame. This can definitely help limit the amount of collusion.
Alright, this concludes our exploration of practical ways to stop cheaters from gaming a gamification system. I talked about several different types of rewards that we can use to reduce their perceived value without attenuating their motivational power too much. I also introduced some metrics (i.e. TUUR and TUCR metrics) that we can use to make any gamification system harder to cheat. Finally, to increase the efficacy of these metrics, total transparency and social shame can definitely help deter cheaters.
Next time, let's talk about something new. As always, discussion is welcome here, and the comments are always open. See you next time.