
Halo 4 devs speak out against sexism

 
I know I link to these guys too much for video game discussions, but the Extra Credits episode on harassment discussed harassment, bullying and hateful language in multiplayer settings, and potential ways to solve the issue without negatively impacting the community as a whole. They argue that the best way to disable trolls and harassers is simply to take away their microphone, so to speak, and I really like one of their recommendations: since devs can track how often a player is muted, anyone who gets muted more often than average (they cite at least 10% above the norm, though this would obviously need adjustment based on various factors per game) automatically begins matches muted. Other players can choose to unmute them if they're willing to risk dealing with their language.
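To make the idea concrete, here's a rough sketch of what that heuristic might look like. It's purely illustrative on my part, not anything 343 has announced; the stat fields and the 10%-above-the-norm threshold are just assumptions taken from the episode's example:

```python
# Illustrative sketch of the Extra Credits auto-mute idea. This is not a
# real Halo 4 / Xbox Live API; every name and number here is an assumption.

from dataclasses import dataclass

@dataclass
class PlayerStats:
    gamertag: str
    matches_played: int
    times_muted: int  # how many times other players have muted this player

def mute_rate(p: PlayerStats) -> float:
    """Average number of mutes this player receives per match."""
    return p.times_muted / p.matches_played if p.matches_played else 0.0

def starts_match_muted(p: PlayerStats, population_avg_rate: float,
                       threshold: float = 1.10) -> bool:
    """True if the player should begin matches muted.

    threshold=1.10 encodes the "at least 10% above the norm" example;
    a real system would tune this per game, playlist and region.
    """
    return mute_rate(p) > population_avg_rate * threshold

# Example: a player muted in 4 of their last 10 matches, in a playlist
# where the average player gets muted about once every 10 matches.
noisy = PlayerStats("ExamplePlayer", matches_played=10, times_muted=4)
print(starts_match_muted(noisy, population_avg_rate=0.1))  # True
```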

Yes, there's definitely potential this won't work, and there will almost definitely be a period of growing pains as the devs and players work out the kinks in the system. But the mere fact that they're acknowledging this as a major issue harmful to the gaming community and are trying to address it, instead of shrugging it off as part of gaming culture, is very encouraging.

Meh. I'm a semi-professional gamer, in that I have made money playing video games at a competitive level, but it's not my primary form of income. I am female. If people want to harass me because of my gender, well, I'll just beat them.

I don't have thin skin.

Quote:
Originally Posted by Tedronai View Post
I just think there's room between 'he wasn't being serious, so we won't take any action' and 'he said something that could be interpreted as offensive, so, whether that was the intent or not, he's banned for life'.
Does intent really matter though? If someone was offended, then someone was offended. It's rather childish to tell "jokes" at the expense of others anyway, and it's not remotely funny.

Quote:
Originally Posted by Ancient Heirloom View Post
Does intent really matter though? If someone was offended, then someone was offended. It's rather childish to tell "jokes" at the expense of others anyway, and it's not remotely funny.
Oh, trust me, intent matters a lot. So does what was actually said. Having been the target of a vitriolic rant about how I was a sexist pig simply because I asked if I could help, I will state for the record that allowing a subjective definition of "offensive" is a disaster waiting to happen.
The circumstances in question were simple - a person I knew had dropped stuff and I offered to help. In her mind that was demeaning, because it implied that she was incapable of doing it herself, whereas to my mind I had just offered to help. And from there the rant started. Walking away from it didn't help because she followed me so she could continue to abuse me.
The same person has also accused me of being an unfeeling b@$t@rd because on a later occasion when I saw she had dropped stuff I didn't offer to help. And regularly accuses people of being unhelpful chauvinist pigs when they don't offer to help her, because it's demeaning to make her ask for help. Yes, she has some serious problems.

And this illustrates a point - there must be some objective assessment when a complaint is laid, because the very fact of a complaint does not make the complainant right, nor does it make the person complained about wrong. You simply *cannot* allow a subjective definition of what is or is not offensive.

One of the things you also have to look at with comments is whether the people you're talking to know each other outside of the game. Joe and Eddy are best friends in real life. They talk trash to each other all the time, calling each other names that would offend everyone. They both get in game and start talking normally to each other, but everyone hears it. Their intent is to talk the way they always do, but everyone else is offended. Can you see why someone would report them?

Now suppose the same pair know that being reported for their "normal" comments will cost them their accounts. Think they'll temper their language a bit?

Meanwhile, a twelve-year-old who wants to act like an idiot gets on the game and starts insulting everyone, just to troll. When asked about it, he says he didn't mean it. And then does it again.

I think a published guideline is the best judge of what counts as offensive. I also accept that people get mad about things and make comments without thinking; that's going to happen. I would, however, pull the logs of anyone reported to see what was said and who said it. If Joe makes a comment and a third player reports it, look at the logs and inform Joe and Eddy that their language has earned them a suspension from the game and a black mark on their accounts, and that if it keeps getting reported their accounts will be banned.
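As a rough illustration of that kind of escalation ladder (nothing to do with how 343 or Microsoft actually handle reports; the states and thresholds here are entirely made up on my part):

```python
# Hypothetical report-escalation ladder, purely to illustrate the
# suspend-then-ban idea above. It does not reflect any real system.

from enum import Enum

class Penalty(Enum):
    NONE = "no action"
    SUSPENSION = "temporary suspension plus a black mark on the account"
    BAN = "account banned"

def review_report(prior_confirmed_reports: int, logs_confirm_abuse: bool) -> Penalty:
    """Decide a penalty after a moderator has actually read the logs."""
    if not logs_confirm_abuse:
        return Penalty.NONE        # heat-of-the-moment comment or bad-faith report
    if prior_confirmed_reports == 0:
        return Penalty.SUSPENSION  # first confirmed offense: suspend and flag
    return Penalty.BAN             # it keeps getting reported: ban the account

print(review_report(0, True).value)  # temporary suspension plus a black mark on the account
print(review_report(2, True).value)  # account banned
```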

It sounds to me like there has been a culture of permissiveness, or at least relative tolerance, in video games toward "hate speech." Previous policies seem to have been poorly implemented.

The Halo 4 devs are saying they are upping enforcement. I think it will upset a lot of people. I think it will be hard to do well. However, I think that by trying to change the atmosphere of the game they will drive some haters away while letting more "care-bears" play their games. I still think that works out to a net gain in fans who will enjoy their games.

At my most optimistic, I think it's a good attempt at improving the online community. At my most cynical, it's a good way to expand their subscriber base into another market.

Quote:
Originally Posted by Little_Rudo View Post
I know I link to these guys too much for video game discussions, but the Extra Credits episode on harassment discussed harassment, bullying and hateful language in multiplayer settings, and potential ways to solve the issue without negatively impacting the community as a whole. They argue that the best way to disable trolls and harassers is simply to take away their microphone, so to speak, and I really like one of their recommendations: since devs can track how often a player is muted, anyone who gets muted more often than average (they cite at least 10% above the norm, though this would obviously need adjustment based on various factors per game) automatically begins matches muted. Other players can choose to unmute them if they're willing to risk dealing with their language.

Yes, there's definitely potential this won't work, and there will almost definitely be a period of growing pains as the devs and players work out the kinks in the system. But the mere fact that they're acknowledging this as a major issue harmful to the gaming community and are trying to address it, instead of shrugging it off as part of gaming culture, is very encouraging.
I love that episode of EC. I wish it worked, because the auto-mute-for-too-many-mutes is already in Halo: Reach (implemented by 343), and apparently it's not doing enough.

I'll say again -- it's fairly simple to identify specific words and forms of speech that are bannable offenses. If people want to play that game, they aren't allowed to use those words and phrases. There isn't ambiguity or a grey area; in public forums, one does not speak in certain ways.

The vast majority of people understand the distinction. They curtail the abusive language when they're at grandma's, or in the classroom, or out to dinner on a date, or whatever. Even a live sporting event is more civil, because people will have the crap beaten out of them if they behave like feral, foul-mouthed children there.

When the developer states that their game is no longer the place for abusive language, people will need to make the switch and operate as if they are in one of those other arenas. The only people who will have a legitimate problem are the ones who are incapable of regulating their behavior when at grandma's house or in the classroom. Which, frankly, is closer to 0% of the population than any other number.

Quote:
Originally Posted by Muggie2 View Post
Oh, trust me, intent matters a lot. So does what was actually said. Having been the target of a vitriolic rant about how I was a sexist pig simply because I asked if I could help, I will state for the record that allowing a subjective definition of "offensive" is a disaster waiting to happen.
This is a very strange example. There's only one person behaving in an inappropriate and offensive manner, and only one person who would be banned. In Halo 4, people don't get to complain about you because you were inappropriately chivalrous. They can only complain about the player who launched into inappropriate invective for three minutes and wouldn't stop when repeatedly advised to.

Quote:
Originally Posted by Atlictoatl View Post
There's only one person behaving in an inappropriate and offensive manner, and only one person who would be banned. In Halo 4, people don't get to complain about you because you were inappropriately chivalrous. They can only complain about the player who launched into inappropriate invective for three minutes and wouldn't stop when repeatedly advised to.
Yes, but that person is complaining about me being sexist. They were offended. Now if the person complaining gets to define whether something is offensive or not, I would be the one getting banned, not her. Because she took offense, not because I had said anything offensive. This is an example of subjective offense.
What I have said is that there must be *objective* standards, and that allowing offense to be defined subjectively would be a disaster. The example I gave is a reason why.

On the other hand, the simplistic approach of declaring certain words inherently offensive is also flawed. The example I gave was the Yahoo user group word filter, which would refuse to post certain words and report you if you used them. The problem is that it just searched for character strings and rejected anything that contained them, ignoring punctuation and word boundaries. So you couldn't talk about Japan because it contains "Jap", problems couldn't be nipped in the bud because of "nip", asking the shortened form of "who are" was out, and you couldn't scrape anything off your shoe. Some people's names couldn't be used because they included a banned string, and one person had their account suspended because of their name. Their actual name.
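For what it's worth, that failure mode (the classic "Scunthorpe problem") is easy to reproduce. A filter that does raw substring matching after stripping punctuation flags "Japan" and "nipped", while one that only matches whole words does not. A toy comparison, with a deliberately mild blocklist, just to illustrate:

```python
# Toy demonstration of why raw substring filtering misfires.
# The blocklist is deliberately mild; the point is the matching logic.

import re

BLOCKLIST = ["jap", "nip"]

def naive_filter(text: str) -> bool:
    """Flags a message if any blocked string appears anywhere, after
    stripping punctuation and spaces -- roughly the behaviour of the
    Yahoo filter described above."""
    stripped = re.sub(r"[^a-z]", "", text.lower())
    return any(word in stripped for word in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flags only whole-word matches."""
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in BLOCKLIST)

for msg in ["I visited Japan last year", "Problems should be nipped in the bud"]:
    print(msg, "| naive:", naive_filter(msg),
          "| whole-word:", word_boundary_filter(msg))
# Both innocent messages trip the naive filter; neither trips the whole-word one.
```

Even whole-word matching is only a first step, of course; it still can't judge context or intent.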

The only answer I can see is for the moderators to take an active role, and have any and all cases of reported sexist or racist language investigated by them, and assessed against some form of standards. To have a simple binary decision model is heavy-handed at best, because there will be a lot of shades of grey. Not fifty, please. But, as an example, Halo 4 is a worldwide game, and there will be players from a number of different cultures, each with different insults and unacceptable language. Should a player from the US be banned because they used a common word which happens to be a racist insult in another country (or vice versa)? What of language within a particular occupational group which contains words which are perfectly valid in that community but can be considered racial insults outside it? (Yes, I can think of examples of each of these off the top of my head)

Subjective definition of offensive is unworkable. Word recognition algorithms are unworkable. Banning for the first offense is heavy-handed (especially if the word is used in a totally non-offensive context). These are all downsides of having a zero-tolerance policy for sexist or racist language. The aim is good, but the implementation is going to be very tricky for the reasons I've listed above.

Quote:
Originally Posted by Muggie2 View Post
Yes, but that person is complaining about me being sexist. They were offended. Now if the person complaining gets to define whether something is offensive or not, I would be the one getting banned, not her. Because she took offense, not because I had said anything offensive. This is an example of subjective offense.
What I have said is that there must be *objective* standards, and that allowing offense to be defined subjectively would be a disaster. The example I gave is a reason why.

On the other hand, the simplistic approach of declaring certain words inherently offensive is also flawed. The example I gave was the Yahoo user group word filter, which would refuse to post certain words and report you if you used them. The problem is that it just searched for character strings and rejected anything that contained them, ignoring punctuation and word boundaries. So you couldn't talk about Japan because it contains "Jap", problems couldn't be nipped in the bud because of "nip", asking the shortened form of "who are" was out, and you couldn't scrape anything off your shoe. Some people's names couldn't be used because they included a banned string, and one person had their account suspended because of their name. Their actual name.
These are absurdist examples. Hate speech over Halo 4 headsets is not a word typed into a Yahoo filter. Your first example also doesn't apply -- the ban is not on behavior, it's on speech. You can teabag your victims all you like, you just can't scream "Die you f-ing faggot" into the headset while you do so. If your example had been that you said to her, as you picked the item up for her, "Here you go, milady" and she then chose to flag you (not aggressively rant at you) for offensive behavior, your example would at least become relevant to this discussion.

However, since the only thing we have to go on is a few short, non-instructive paragraphs about the intent of a new policy, and we have absolutely no information about how that policy will be implemented, it's extremely early to declare that the policy is flawed and will likely be a failure. You have no basis on which to determine whether "Here you go, milady" can be flagged as offensive, how it would be flagged as offensive, whether there is any process of determination on the part of the banning body, or whether there is any appellate process.

On the other hand, I think we can safely say that "Die you f-ing faggot" is well within the parameters of a bannable offense, and getting rid of it is a wholly good thing.

It may very well be that the implementation is executed such that people are flagged and banned inappropriately, but to exclaim from the very first mention of a new policy that the policy is flawed and doomed is to be excessively cynical and critical. You may not be doing that, in which case I address the general alarmist tone of portions of this thread.

Quote:
The only answer I can see is for the moderators to take an active role, and have any and all cases of reported sexist or racist language investigated by them, and assessed against some form of standards. To have a simple binary decision model is heavy-handed at best, because there will be a lot of shades of grey. Not fifty, please. But, as an example, Halo 4 is a worldwide game, and there will be players from a number of different cultures, each with different insults and unacceptable language. Should a player from the US be banned because they used a common word which happens to be a racist insult in another country (or vice versa)? What of language within a particular occupational group which contains words which are perfectly valid in that community but can be considered racial insults outside it? (Yes, I can think of examples of each of these off the top of my head)

Subjective definition of offensive is unworkable. Word recognition algorithms are unworkable. Banning for the first offense is heavy-handed (especially if the word is used in a totally non-offensive context). These are all downsides of having a zero-tolerance policy for sexist or racist language. The aim is good, but the implementation is going to be very tricky for the reasons I've listed above.
Obviously, there are issues that will need to be addressed, and you cite some reasonable ones. It's not our job to devise the system, though. Nor is it to advise those whose job it is. Let's at least wait and see what the system is before we decide it's broken and unworkable.




 
