I know MediaBiasFactCheck isn’t the be-all and end-all of truth or bias in media, but I find it to be a useful resource.
It makes sense to downvote it in posts that have great discussion – let the content rise up so people can have discussions with humans, sure.
But sometimes I see it getting downvoted when it’s the only comment there. Which does nothing, unless a reader has rules that automatically hide downvoted comments (but a reader would be able to expand the comment anyway… so really no difference).
What’s the point of downvoting? My only guess is that there are people who are salty about something it said about some source they like. Yet I don’t see anyone providing an alternative to MediaBiasFactCheck…
The alternative is to use your own brain.
The fact that people are so often so ignorant and/or ideologically blinkered that they can’t see plain bias when it’s staring them in the face is the problem, and relying on a bot to tell you what to believe does not in any way, shape or form help to solve that problem. If anything, it makes it even worse.
deleted by creator
I don’t think that’s what they’re saying at all, but I’d say if you think the bot’s source is then I don’t know what to tell you
deleted by creator
Of course I’m not “immune” - nobody and nothing is perfect.
But I pay attention and weigh and analyze and review and question, which beats the shit out of slavishly believing whatever I read.
deleted by creator
The only competition here is between relying on one’s own judgment vs. relying on a third party.
deleted by creator
Okay
Right there. Obviously. In fact, that’s the exact point of a percentile - it’s a ranking system, which is to say, a competition.
No.
deleted by creator
Right - you’ll just assume that I see it as some sort of competition that I’m winning.
and you do all that based on facts.
you can analyze, review and question facts and then form an opinion, but the first step is to be able to trust the facts you read, and that is where the rating of the source may be useful (if you are not already familiar with said source).
unless “using your own brain” is a euphemism for discarding facts that don’t fit your opinion, then you indeed don’t need to know anything about the trustworthiness of the source 😂
No - actually I do the bulk of it based on presentation.
“Facts” fall into two categories - ones that can be independently verified, which are generally reported accurately regardless of bias, and ones that cannot be independently verified, which should be treated as mere possibilities, the likelihood of which can generally be at least better judged by the rest of the article. In neither case are the nominal facts particularly relevant.
Rather, if for instance the article has an incendiary title, a buried lede and a lot of emotive language, that clearly implies bias, regardless of the nominal facts.
That still doesn’t mean or even imply that it’s factually incorrect, and to the contrary, the odds are that it’s technically not - most journalists at least possess the basic skill of framing things such that they’re not technically untrue. If nothing else, they can always fall back on the tried and true, “According to informed sources…” phrasing. That phrase can then be followed by literally anything, and in order to be true, all it requires is that somebody who might colorably be called an “informed source” said it.
The assertion itself doesn’t have to be true, because they’re not reporting that it’s true. They’re just reporting that someone said that it’s true.
So again, nominal facts aren’t really the issue. Bias is better recognized by technique, and that’s something that any attentive reader can learn to recognize.
Double facepalm.
Bias can be subtle and take work to suss out, especially if you’re not familiar with the source.
After getting a credibility read of mediabiasfactcheck itself (which I’ve done only superficially for myself), it seems to be a potentially useful shortcut. And easy to block if it gets annoying.
The main problem that I see with MBFC, aside from the simple fact that it’s a third party rather than one’s own judgment (which is not infallible, but should still certainly be exercised, in both senses of the term), is that it appears to only measure factuality, which is just a tiny part of bias.
In spite of all of the noise about “fake news,” very little news is actually fake. The vast majority of bias resides not in the nominal facts of a story, but in which stories are run and how they’re reported - how those nominal facts are presented.
As an example, admittedly exaggerated for effect, compare:
with
Both relay the same basic facts, and it’s likely that by MBFC’s standards, both would be rated the same for that reason alone. But it’s plain to see that the two are not even vaguely similar.
Again, exaggerated for effect.
MBFC doesn’t only count how factual something is. They very much look at inflammatory language like that, and grade a media outlet accordingly. It’s just not in the factual portion; it’s in the bias portion. Which makes sense since, like you said, both stories can be factually accurate.
I haven’t seen any evidence that it does that, and quite the contrary, evidence that it does not - examples from publications ranging from Israel Times to New York Times to Slate in which an article full of clearly loaded language was accompanied by an assessment of high credibility.
It’s possible that it’s improved of late - I don’t know, since I blocked it weeks ago, after a particularly egregious example of that accompanied a technically factually accurate but brazenly biased Israel Times article.
The bot wasn’t assessing the individual articles. It was just pulling the rating from their website. If you look at the full reports on the website they have a section that discusses bias, and gives examples of things like loaded language found in the articles they assessed.
Right, nor did I expect a rating based on an individual article - sorry if that’s the way I made it sound.
It’s simply that the rating of high credibility accompanying an article that was so obviously little more than a barrage of loaded language cast the problem into such sharp relief that I went from being unimpressed by MBFC to actively not wanting to see it.
Totally get that. And I’ve not been trying to push people to accept the bot, or saying that MBFC isn’t flawed. Mostly just trying to highlight the irony of some people having wildly biased views, and pushing factually incorrect info about a site aimed at scoring bias and factual accuracy.
NO, THEY DO NOT.
“rex has a mange” is a factual statement that can be investigated and either confirmed or rejected.
same goes for “rex’s leash was inadequate” and “tom’s hold of the dog was weak”.
there are a lot more facts in your second example, compared to the first one.
no, they would not, and it is pretty easy to find out - https://mediabiasfactcheck.com/methodology/
your powers of “paying attention, weighing, analyzing, reviewing and questioning” are not as strong as you think.
be careful not to hurt yourself when you are falling down from this mountain.
So are you saying that you wouldn’t be able to recognize my second example as a biased statement without the MBFC bot’s guidance?
Or did you just entirely miss the point?
i am saying you are lying about the same facts in your two examples and i am saying you are lying about how these two statements would be rated by mbfc, because you either didn’t exercise your imaginary analytical skills, or you are intentionally obfuscating.
you can read that. it is just above your last comment.
All I see here is someone whose ego relies on a steady diet of derision hurled in the general direction of strangers on the internet.
sure. go back to your brilliant analysis that doesn’t rely on facts. bye.
It sounds like if the bot did not like your favorite source…
No it doesn’t. That assumption just fits the strawman living inside your head.
Lol “I do my own research” vibes.
I’m not saying we should all take it as an objective truth. But I don’t have the time or motivation to read a selection of articles from every new source I encounter (and fact check their articles) so I can get an idea about the source’s reliability.