I agree. I think that there are definitely sometimes competing harms, but generally the rights of a minority group to talk about the oppression in their lives is more important than safer people not being able to escape from politics.
I mean it's not even a question of not being able to escape. If someone is discussing a thing I don't want to hear about, I can always not read that blog/forum/site/whatever. It's only a question of which of us bears the burden: is it on them to make sure they keep their stuff out of my face, or on me to keep my face out of their stuff?
I agree in some senses, but I have sympathy for people who get stressed out by politics but would like to see what their friends are up to on Twitter/Facebook/Mastodon.
I've ended up adding a small bit of text to all of my link posts on Mastodon specifically to make it possible to filter them out.
Indeed, that's part of the burden I mean – it's not just the logistical hassle of withdrawing from a given forum or making your own anti-trigger arrangements¹, it's also the fact that if you do, you miss other things posted there. Certainly in some cases there will be disadvantages to doing it either way round, and it's a question of which one is worse.
¹ I'm reminded that many years ago, by the luck of happening along at the right moment with a piece of related and easily adapted code in my back pocket, I once helped someone produce a custom S2 style which filtered entire posts out of their reading page based on detecting a particular trigger word in the text of the post, so that they could remain on LJ/DW and not miss anything else.
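For anyone curious what that kind of filter amounts to, here's a minimal Python sketch of the same idea. The original was an S2 style, which I don't have to hand, so every name here is made up for illustration: it hides any post whose text contains a listed trigger word (whole-word, case-insensitive) and passes everything else through untouched.

```python
import re

# Illustrative trigger words -- in the real S2 style this was
# whatever word the person needed filtered.
TRIGGER_WORDS = {"spider", "needle"}

def is_filtered(post_text: str) -> bool:
    """Return True if the post should be hidden from the reading page."""
    # Split into lowercase words so "Spider!" still matches "spider".
    words = re.findall(r"[a-z']+", post_text.lower())
    return any(w in TRIGGER_WORDS for w in words)

def reading_page(posts: list[str]) -> list[str]:
    """Keep only the posts that don't mention a trigger word."""
    return [p for p in posts if not is_filtered(p)]
```

The point, as in the original, is that the filtering happens entirely on the reader's side: the people posting don't have to do anything, and the reader still sees everything else on their reading page.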
Hmm, I was never a Twitter person, and it wasn't politics, rather advertising + "comparison kills contentment" + the waste of time that took me off Facebook, but I do still miss keeping up with my real (but long distance) friends. So I can see the problem.
And those thankfully work both ways, for screening in as much as screening out stuff, depending on the needs and wants of the individual reader. And this ties back into the first item on your list today as well, particularly where "screening in" is concerned!
Would have been tricky with the way I'm automating it right now. For now it just says "link-post" at the start of the post. If I ever get the chance then I'll definitely do that.
Triggers are still on my list of "unpopular opinions" topics. (Thankfully, I've knocked out 2 of these lingering thoughts in recent months.) I already use Mastodon filters to hide many posts that use certain keywords, and I enjoy this feature a lot. I think that trigger warnings should just go away in principle, in favor of a culture that encourages appropriately labeling discussions.
As implemented in Mastodon, content warnings are a visual mess, even if you choose settings that automatically show them so they aren't also a carpal-tunnel hazard. Instead of being metadata that the viewer can choose how to handle, they are actively bad GUI features. Separately from Mastodon, as a concept, they don't help trauma survivors: a recent meta-analysis concluded that they simply don't work for their intended purpose.
You said we should label conversations better, but also that you were against trigger warnings. And I don't understand what a trigger warning is that isn't a label.
A trigger warning is a very specific kind of label, which implies that the thing which is labelled is likely to be traumatising. A culture of providing content notes on a neutral basis provides all the benefits of trigger warnings, but doesn't cause the problem highlighted in the post you linked to of making people feel as though there's something wrong with them because an aspect of their life is being flagged as traumatic.
Ah! Yes, hashtags are already far into that territory: simultaneously an unobtrusive part of the grammar and meta-tags that can be used for filtering. That alone might suffice if people and platforms used the technology more commonly. Hashtags don't have to be provided before/after/outside other text, because they are explicitly part of the text already. Use accurate words, and your text is already self-identified for topic indexing. An index which the reader can choose to use as labels, providing "Warning: Hazardous Content Ahead!" interruptions not specifically provided by the author.
If that explanation doesn't convey a distinction, then perhaps a counter-question would help. Why would a subject field on a webpage or a chapter title in a book ever need the words "Content Warning" included, if the subject or title was itself an accurate description of the following contents? I'm arguing that those two words chosen by a text's author will add no value. Instead, offer automation that allows readers to customize their personal experience so they feel more self-agency in the resulting interaction. Chapters and indexes are the solution used prior to the computing era, but they work only if they are accurate, not leading readers into inaccurately-identified distractions. Use accurate words, and automation can provide the rest. (Like hashtag filtering, but maybe in the future with AI providing useful "topic clouds" from our actual text, upon which our personal customizations will choose to present/hide discussions for us.)
I have a very bland example of why CWs are better than filtering. My anti-sports filter on Mastodon is now up to 48 keywords and I expect it will continue to grow indefinitely. People who like sports keep finding new ways of describing sports. I'm a glutton for punishment because I browse the federated timeline (i.e. the firehose of everything) to discover new people to follow, so I keep seeing sports.
Now, I don't have PTSD around sports. I just loathe them. I have other filters (e.g. anti-orange-guy filter, anti-bird-site-owner filter) where my tolerance level for the subject matter is a lot lower. For those, I wish people were much more eager about CWing their own posts.
I do the same with all things Trump, Musk, Twitter, or new Star Trek. I enjoy seeing those minuscule messages that something on that topic is currently hidden from me. The author still gets to enjoy their exposition, and I still get to enjoy my continued ignorance. :) Filtering (and its opposite, hashtag subscription) is very effective.
The content warning method leads to faster permutation explosion, though, since it requires every author to anticipate every possible reader's every possible reaction. When everything is content warned, then there's no longer any such thing as actual text exposition. I've seen at least one site that wanted all posts unrelated to that site's reason for existing to be content warned, so they wouldn't normally show up at full height on the local feed. It leads to everything being gated eventually. Which makes simple tagging of topics much more effective, since it can be done automatically without even the author's effort (but it's better with the intentional work). Leaving control over it to the recipient is what provides self-agency and psychological relief, not gating words... or so I'm arguing, at least.
But surely the people who are most likely to want to escape from the politics of a particular axis of oppression are precisely those who are affected by it? I'm not someone who finds trigger warnings particularly helpful, but if I were, the things I would want warnings for would be discussion of homophobia, transphobia, and anti-autism sentiment.
For a particular axis, I strongly suspect you're right.
But there's a lot of people who are just "I don't want to encounter politics at all." - and it's that which has been driving the discussion on Mastodon recently. On the one side people saying "This has been my escape from the real world, where I can talk about my passions." And on the other side various minorities saying "I don't get to escape from racism/sexism/homophobia/etc, and I'm not going to hide my posts to make you comfortable while you ignore it all."
Is it possible that the side who want a space free from politics are a mixture of those who want it because they're uncomfortable thinking about other people being oppressed and those who want it because the real world is full of people directing racism/sexism/homophobia/etc at them?
But you just acknowledged that the people who are most likely to want to escape from the politics of a particular axis of oppression are precisely those who are affected by it. That is evidence - probabilistic, admittedly, but is there anything more concrete causing you to assume otherwise?
But the evidence I've seen so far is of a bunch of (entirely white) people saying "we like our nice space here with no politics in it" while black people tell them that they aren't going to go back in their box and hide their lives away.
So while it's possible that that's the case, I've (as I said above) not seen any evidence of it.
To be clear, by: "For a particular axis, I strongly suspect you're right.
But there's a lot of people who are just "I don't want to encounter politics at all." - and it's that which has been driving the discussion on Mastodon recently."
What I meant was:
Percentage of all people who are gay and want an escape from politics so much that they're willing to never discuss the interaction of society and homosexuality: 0.1%
Percentage of all people who are straight, but specifically don't want to encounter discussions of homosexuality: 0.05%
Percentage of all people who are black, and want an escape from politics so much that they're willing to never discuss the interaction of society and race: 0.1%
Percentage of all people who are white, but specifically don't want to encounter discussions of race issues: 0.05%
Percentage of all people who are women, and want an escape from politics so much that they're willing to never discuss the interaction of society and gender: 0.1%
Percentage of all people who are men, but specifically don't want to encounter discussions of gender issues: 0.05%
Percentage of all people who are straight, white men who don't want to discuss race, gender, or homosexuality, because doing so makes them uncomfortable: 10% (Maybe 30%)
So for any individual axis, yes, it's more likely to be a person on that axis who is so fed up with it that they need an escape. But the big mass of complainers about discussion is still that wodge of somewhat-privileged-if-not-deliberately-oppressive people who "Just want to play my wizard school game without having to think about it too much." and object to people bringing "politics" into it.
Thanks for spelling that out. And I must of course recognise that I'm not seeing any of these discussions because I'm allergic to social media that aren't DW.
I think the thing that made me inclined to push back was the phrasing you used earlier of "This has been my escape from the real world" because whilst I agree that the proportion of people who are $oppressed group who want to never discuss that oppression is pretty tiny, I would guess that the proportion of people who want to be able to ringfence a space in which they can choose not to discuss it right now is probably close to 1. And contrariwise, I'd be surprised if the real world of the average straight white abled &c man is especially full of discussions of race, gender &c that they need to escape from. My experience is that far more of those kinds of conversations are intracommunity.
I agree that there is going to be a desire for ringfenced spaces. But I wouldn't expect those spaces to be on Twitter/Mastodon/general blogs. I'd expect a small community on a BBS/Discord/etc. where they could contain the discussion more easily.
I wonder if the "Discussions of politics are taking over" feeling is similar to the "Women are taking over the conversation" effect - where it only takes 30% of the speech to be by women before people feel like they've dominated the conversation. In a group of a dozen or more people you wouldn't need to have each person bring up an issue more than once a week before at least some people felt like "politics is taking over".
I feel like any worthwhile communication principle, like content warnings, if it becomes at all accepted, has worthwhile uses but also accumulates a cruft of simplifications: people lashing out when they're hurt in ways that ask impossible things of others, and people wilfully-ignorantly or wilfully misusing it to push what they want. And it can be a challenge to unravel what's useful and what isn't.
I feel like people instinctively want "content warnings" (or "safe spaces") to come with a standard version that's universally correct, but many things don't work like that. It's worth having a generally accepted standard for which CWs are worthwhile in wider society, but different communities and micro-communities and individual blogs are likely to have their own standards too. Some things are useful to tag for everyone because most people find them somewhat disturbing. Some things are only practical to tag if you expect particular people to benefit from it.
Sometimes needs are incompatible and we need to balance them as best we can. The example of "can't add a warning around everything that's a central part of my identity" is a good one. But there can still be variation: someone's blog or small community might say "I've no problem with people who exercise regularly but for me it triggers distress about weight loss, so please don't talk about it *here*". If someone has a religion that's primarily defined by bigotry, I might want them to warn for it in more circumstances than someone who has another religion.
I also think that this could be improved with technical capabilities. E.g. have a separate way of displaying "content tags that this post/tweet/story is ABOUT" and "content tags that this post/tweet/story contains in passing". Where maybe the first are displayed prominently to everyone, and the second are available if you want them but aren't the first thing you see. Somewhat separating "some people find this difficult" from "this is bad". So that it's possible to tag things you know SOME people find distressing to read, without making the first thing everyone sees a big list of terrible things. And so it would be easy for people to configure their reader so there's some things they don't see at all, or only if they really want to, and other things they might get a warning of. Which is just as relevant for things other than triggers: e.g. I muted some words because I was fed up of seeing some tweets, and I want nudity to be behind a cut in case I'm reading somewhere other people can see over my shoulder.
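Here's a rough Python sketch of what that two-tier tagging might look like, assuming a hypothetical reader app (none of these names come from a real platform): posts carry prominent "about" tags and quieter in-passing "mentions" tags, and each reader configures, per topic, whether it's hidden entirely or shown behind a warning.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    text: str
    about: set[str] = field(default_factory=set)     # what the post is ABOUT
    mentions: set[str] = field(default_factory=set)  # touched on in passing

def display(post: Post, hide: set[str], warn: set[str]) -> str:
    """Reader policy: 'hide' topics vanish entirely; 'warn' topics sit
    behind a notice; everything else shows normally."""
    topics = post.about | post.mentions
    if topics & hide:
        return "[hidden]"
    flagged = topics & warn
    if flagged:
        return f"[warning: {', '.join(sorted(flagged))}] (click to show)"
    return post.text
```

A real implementation would presumably also weight "about" tags more prominently in the UI than "mentions" tags; the sketch only shows the per-reader hide/warn split, which is the part that separates "some people find this difficult" from "this is bad".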
On the one hand, putting trigger warnings on anything that's likely to bother anybody is simply impractical.
Thought on that line: are spoiler alerts a form of trigger warning? What's being triggered is different than what's normally marked by such warnings, but it functions in the same way: warning people off something they might not want to cast their eyes upon, lest it have a negative effect on their mental functioning.
On the other hand, the answer of "if you don't want to see it, just avoid forums where it's likely to come up" is far too glib. How do you know where it's likely to come up? And the avoiding can cripple your life. My unwillingness to sell my soul to Mark Zuckerberg keeps me off FB where much of life is going on these days.
On that line of glibness, people have told me that if I don't like the Jackson movies, just avoid them. How the heck am I supposed to do that? I'd have to stop reading you; they come up occasionally. I'd have to drop out of all Tolkien discussion, because I don't know of any labeled "no movie talk here." I'd even have to quit my job editing a Tolkien journal, because while we don't publish movie stuff (mostly: we've got an article in the next issue comparing a specific technical aspect in the book and movies), without knowing the movies I wouldn't be able to be on the alert for movie assumptions seeping into discussions of the book.
I agree that we can't warn about everything. But on the other hand we can tag things pretty well. I think that spoiler alerts and trigger warnings are both essentially kinds of tags. Is there a big difference between "#spider" and "Trigger Warning: Spider" other than that the latter has an expectation associated with it?
And yes, I agree that you can avoid things a certain amount, but it gets trickier the more you want to avoid them, and definitely has side-effects.
Being able to insulate yourself from bigotry (which is what the call for CWs around lived experiences of racism is) is a privileged position. We must always prioritise the safety of marginalised people over the comfort of privileged people. I feel so strongly about this, that I made that a rule on my Mastodon server.
Personally, as a disabled person, I do want content warnings on discussions of ableism and disablism. I encounter those things too often in my day-to-day life and want to have some agency/choice in how I encounter them in spaces that are supposed to be for social interaction and leisure. Same for things like homophobia, transphobia, and queerphobia generally. It's why, for the last few years, Mastodon was the only social media space that I felt able to engage with on a regular basis. It has changed in recent months, following the influx from Twitter, and it's reaching a point where I no longer feel comfortable using it any more, so I don't know if it will remain sustainable. As others said upthread, if people are not content warning their stuff, it means I need to keep myself out of that space, which means I bear the burden of not really having any social spaces online that I feel comfortable in. Maybe that's no great loss. I do still dip in and read Dreamwidth, I just don't have the energy to post.
I think that warning/tagging is a good in-between point: give people the tools to mute things they don't want to see, without hiding them from everyone else.
Yes, I agree, things that already have tags don't need anything more specific in general.
If only more sites allowed separate meta tags.
A CW is just a subject line