Faces, have your palms ready. This one's a stormer. It turns out that as of this week, big honcho game ratings body the ESRB will be eschewing the process of actually using human eyes and brains to judge the content of a good proportion of the games submitted, in favour of letting cold, unfeeling machines do the job for it.
That's bound to be fine, right?
The new system requires that developers fill in an in-depth questionnaire detailing all content that may be considered offensive by the masses. There are sections for the likes of sex, violence, strong language, and even poop. This questionnaire is then analysed by the ESRB's Skynet equivalent and a rating is thrown out of its robotic opinion-slot. I kid you not.
Obviously, not having seen the full detail of the questionnaire, it's tricky to weigh up just how accurate this computerised offence-evaluation is going to be. The ESRB promises that questions will break down content in a huge amount of detail, but it's easy to imagine that there'll be a bit less room for tonal context, interpretation of directorial vision, or emotional understanding of the content in question. Human beings will evaluate the games after release, and there will be heavy penalties for developers giving duff information, but is it coasting a little too close to Controversial Video Game Industry Nightmare City to only check this stuff out properly once a game is approved and out?
So far the system is only tagged to rate XBLA, PSN and Wii/DSiWare games, with real, human adjudicators continuing to take the reins on the rating of full retail games. But what do you reckon? Is this a necessary streamlining of resources, or a Fox News headline waiting to happen? Of course, most games are already rated without hands-on experience, by way of submitted DVDs of the most relevant content, but is this system going too far?
Source: The New York Times
April 18th, 2011