Hi there,

My name is Chad. I’m a recent high school graduate who had the privilege of working with the TDE team over the summer of 2021, where I gained some insight into the project and the ideology behind it. Having just finished the IB Diploma Programme, I’ve come to realize that there is significant overlap between the principles of contemporary education and the goals of The Daily Edit, and I’d like to take this opportunity to share the view of a student fresh out of the system.

The IB Diploma Programme is a rigorous course designed to push the breadth and depth of its students’ knowledge. Part of this mission is a requirement that students complete independent research across all academic disciplines, alongside an epistemological course known as “Theory of Knowledge”, in which students examine the origin, attainment, and distribution of knowledge and ideas across several assessed components. This builds upon research skills gained in the PYP and MYP programmes, where students must consistently demonstrate an ability to critically evaluate the validity of sources against a host of criteria; essays, presentations, and projects require extensive lists of academically cited sources, and students must be able to justify the use of each one.

While I cannot speak for other school curricula, I can say that the rigorous research requirements of the IB courses are excellent preparation for navigating life after high school, whether in an academic, corporate, or private setting. Students are trained to spot misleading or outright false information, and to seek out alternative sources in such cases. While this is indeed important, what has recently occurred to me is a question I hadn’t thought to ask before: why exactly is all this necessary?

We can all agree that these skills are important. “Fake news” is more prevalent than ever before, and a big reason for this, in my opinion, is a lack of accountability. News and information in modern society are largely controlled by large media organizations, which we must acknowledge are corporate businesses – and one of the primary goals of any business is to make a profit. In the media world, this is most easily accomplished through eye-catching, exaggerated headlines, one-sided stories, or a reporting focus on emotionally charged topics such as politics, all of which maximize user engagement.

But what exactly happens if an organization reports something false? Alternative sources may sometimes report the correct version, but they will rarely call out the actions of their competitors. For a truly egregious incident, the government may get involved, but for every time that occurs, there are likely thousands of times it does not. The majority of the time, an organization can simply edit or recall the article with minimal consequence.

This can make catching misinformation extremely difficult. How often does a person go back to check on an article they have already read? The likelihood of a reader actually noticing any changes after initially encountering the story is rather minimal. Furthermore, there is little incentive to use multiple sources in a casual setting. After all, reading through the same story dozens of times—just in case there might be something you missed out on—is boring, time-consuming, and inefficient.

The result of this system is that the responsibility of identifying misinformation lies almost entirely with the individual reader, who, lacking the necessary resources and knowledge base, is all too often ill-equipped to do so.


What The Daily Edit offers is accessibility: the chance for any child, teenager, or adult to be a well-informed citizen, regardless of their educational background.


This is a notion supported by the design of modern education systems. Unable to hold the organizations of the present accountable for what they report, we as a society are instead forced to train our future generations to individually combat the culture of misinformation they will inevitably encounter, by incorporating media literacy into curricula such as the IBDP. Students must learn these skills, because those who lack them will be increasingly vulnerable in the modern world.

However, this still leaves a huge problem. Not everyone has access to educational systems that incorporate these ideals, meaning that a large portion of the next generation is growing up ill-equipped to deal with the media world it will enter. Furthermore, the majority of working adults came through the school system of the past, one completely unprepared for this future. Even those who do have access to curricula like the IBDP find these skills difficult to retain without active use, which means they may still be vulnerable to some degree.

Everything I’ve mentioned thus far is fairly common knowledge, and may seem rather obvious. However, it provides relevant background to my next topic of discussion – why I am so excited about The Daily Edit and the potential it holds.

Put simply, what The Daily Edit does is, for the first time, hand users the ability to hold those companies accountable independently. Crucially, it does so by giving its readers the tools to do so in a simple, intuitive, and timely manner. A user does not need to rely on profit-maximizing corporations to point out fallacies, but neither do they have to read the entirety of every article from every news source. Instead, they can easily compare and contrast differing versions, with missing details and media trickery made blatantly obvious by the app – all accomplished via bias-free machine learning technology trained on millions of examples. It brings the benefits of in-depth research skills without nearly the same cost in time or effort, requires no real training to use, and is accessible to people from any and all backgrounds. Above all, it introduces objectivity into a field that is inherently subjective, cutting through the profit-chasing fluff of mainstream media to get to the information that really matters.

I’d like to emphasize that the aforementioned research skills are by no means inferior. However, what The Daily Edit offers is accessibility: the chance for any child, teenager, or adult to be a well-informed citizen, regardless of their educational background. It offers a lifeline to those who may be ensnared in misinformation campaigns, pyramid schemes, or cult followings, and to those with limited opportunities to see the ‘other sides’ of the story at hand. Fundamentally, The Daily Edit offers, for the first time, equality of information – and in the information age, little else could be more crucial to a sustainable way forward as a society.


Their mission is to bring the world closer together


However, there is another possibility that I am extremely excited about. The Daily Edit’s analyses hold immense promise not only for informing society in general, but also for educating children in schools. The app provides access to high-quality information in the classroom, alongside detailed deconstructions of different sources and viewpoints and their interrelations. The Daily Edit is also researching further features that may be relevant both inside and outside the classroom, such as showcasing the performance of journalists and publishers on a given topic over time. Integrating this technology into modern education would allow children to attain an unmatched understanding of knowledge and its role in contemporary society, and would be an incredible benefit in preparing them to deal with the mainstream media of the future.

The Daily Edit has made it clear that its mission is to bring the world closer together by providing better access to information, and part of this mission includes getting involved with education. If you aren’t already aware, The Daily Edit offers free subscriptions to all .edu email accounts, so if you are a teacher or student I strongly encourage you to try out the technology and reap its benefits. You may find that it completely changes your relationship with mainstream media – I know it did for me.

If you find that the technology helps you out, feel free to share how it did so on your preferred social media channels. Make sure to mention @dailyeditapp in your post.   

I hope that this post has given some insight into a modern student’s perspective of The Daily Edit and its potential, and that you’ll enjoy using the app as much as I do. Cheers!

Chad Rossouw 

In this modern world, information travels faster than the speed of reason, so at The Daily Edit we go to great lengths to make our analyses as unambiguous and unbiased as possible. We want you to feel confident that you’re seeing the full story. This ethic permeates every part of our operation, from how we train machine learning models to whom we hire. So, given that we tell you how trustworthy an article is compared to its peers, why should you trust us?

This post explains how our whole pipeline works, from selecting articles to be crawled, to finding the story’s details, to scoring each article. We’ll cover the parts that are completely objective, and the parts that have some subjective elements to them, with an explanation of our rationale. We’ll even show you where we don’t perform so well. We’ll do our best here to explain it all in layman’s terms and will follow up with several other blog posts going into the raw, unadulterated technical detail.

Overview

Analyzing the news takes a lot of work from a number of different pieces of software. Here’s a high-level overview of what happens.

Before we dive in, it’s best to define a few terms that will come up frequently and how we interpret them:

  Source – a publisher whose website we crawl.
  Article – a single piece published by a source.
  Story – a group of articles from different sources covering the same event.
  Detail – a single piece of information within a story, such as an event or something that was said.

How do we choose articles?

Firstly, before we can do any kind of analysis, we need to know that there is even a story. To do this we maintain a database of over 13,000 news sources, updated weekly with new sources as we encounter them. Our crawler operates on a schedule: it periodically wakes up and looks through the sources for any new articles they have published. When it encounters a new article, it puts it in a scratchpad with all the other new articles found in that run. At the end of a crawling run we gather articles with similar content and group them together, calling this grouping a ‘story’.
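To give a flavor of that grouping step, here’s a simplified sketch. The greedy strategy, the similar function, and the 0.8 threshold are illustrative stand-ins, not our production algorithm:

```python
def group_into_stories(articles, similar, threshold=0.8):
    """Greedily group crawled articles into 'stories' by content similarity.

    `similar(a, b)` is assumed to return a score in [0, 1]. Both the
    greedy strategy and the 0.8 threshold are illustrative only.
    """
    stories = []  # each story is a list of related articles
    for article in articles:
        for story in stories:
            # Attach the article to the first story it resembles.
            if similar(article, story[0]) >= threshold:
                story.append(article)
                break
        else:
            stories.append([article])  # no match: this article starts a new story
    return stories
```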

Stories evolve over time: more sources appear, existing sources edit their articles, and some remove their articles altogether. To cover all these cases we have logic around when we reprocess stories. For starters, we refresh stories at most every six hours. We feel this is frequent enough for our analyses to provide real value without overburdening our servers with redundant work. During one of these refreshes, if we encounter an article we already have, we’ll only re-crawl it once every 12 hours. This means we could miss some frequent edits on a breaking story, but by the time things have settled down we’ll have covered the changes. Keeping this relatively infrequent also reduces the burden we place on our sources’ websites.
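The timing rules above boil down to a couple of interval checks. A minimal sketch (the field names in the usage comments are illustrative, not our actual schema):

```python
from datetime import datetime, timedelta

STORY_REFRESH = timedelta(hours=6)     # reprocess a story at most every 6 hours
ARTICLE_REFRESH = timedelta(hours=12)  # re-crawl a known article at most every 12 hours

def needs_refresh(last_processed: datetime, interval: timedelta) -> bool:
    """True if enough time has passed since we last processed this item."""
    return datetime.utcnow() - last_processed >= interval

# e.g. needs_refresh(story.last_refreshed, STORY_REFRESH)
#      needs_refresh(article.last_crawled, ARTICLE_REFRESH)
```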

While we don’t filter or discriminate between sources in any way, we do have one technical limitation that causes us to remove some: poorly structured HTML, or websites that load articles infinitely. When we crawl a news article, we’re crawling the HTML their website serves us. There are recommendations and some poorly followed standards, but for the most part HTML is the Wild West; the number of ways it can be organized is infinite. Most of the time we encounter reasonably well-structured HTML and can extract the text content with ease; sometimes it’s a little more difficult and requires a sophisticated model to parse; other times it’s just plain diabolical. When we encounter a pathological source our application can’t work with, we remove it from our database. This means we might miss a detail or two, particularly if that source had the scoop, but trying to analyze text that might not be the article content would pollute all the other articles we cover with things like advertising text or image captions.

Reading the articles

At the end of this crawling process we have a collection of articles grouped into a ‘story’, ready for analysis. Quite a few things happen during analysis, starting with extracting metadata. Article metadata includes items like the title, author(s), publisher, time published, and whether or not it’s an opinion piece.
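As a simplified picture, the metadata we extract can be thought of as a record like the following (field names are illustrative, not our actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ArticleMetadata:
    """The metadata fields described above; names are illustrative."""
    title: str
    authors: list[str]
    publisher: str
    published_at: datetime
    is_opinion: bool
```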

We then extract the article’s text content. We’re not interested in menus, advertising, or images; what we want is the raw text that makes up the piece. This process is rather technical and has several components of its own, so we’ll cover it in a dedicated post. Worth mentioning, however, is that from time to time our model might leak some text that wasn’t part of the content into the analysis. Most often these leaks are image captions from the article. The ultimate effect is that we sometimes show a ‘more detail’ item which isn’t really relevant. We’re always working to improve this and are regularly reducing the occurrence rate.

How do we find details?

Once we have the article’s raw text content, we split it into sentences. At first this might seem really simple: just split on the period, right? However, it’s one of those things that sounds easy but reveals labyrinthine complexity when you dig a little deeper. For example, what about a prefix like ‘Ms.’? Or an acronym like ‘U.S.A.’? Or an acronym that someone just made up and placed right at the end of a sentence? Despite the challenges, we do eventually get nicely split sentences out of the article.
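To show why naive splitting fails and how abbreviation handling helps, here’s a toy splitter. Our real splitter is considerably more sophisticated; this sketch exists only to illustrate the problem:

```python
import re

# A few abbreviations that should not end a sentence. A real splitter
# needs far more than this illustrative list.
ABBREVIATIONS = {"ms.", "mr.", "dr.", "u.s.a.", "e.g.", "i.e."}

def split_sentences(text: str) -> list[str]:
    """Naively split on terminal punctuation, skipping known abbreviations."""
    pieces = re.split(r"(?<=[.!?])\s+", text)
    sentences, buffer = [], ""
    for piece in pieces:
        buffer = f"{buffer} {piece}".strip() if buffer else piece
        words = buffer.split()
        if words and words[-1].lower() in ABBREVIATIONS:
            continue  # likely an abbreviation, keep accumulating
        sentences.append(buffer)
        buffer = ""
    if buffer:
        sentences.append(buffer)
    return sentences

# split_sentences("Ms. Smith arrived today. The U.S.A. won the match.")
# -> ["Ms. Smith arrived today.", "The U.S.A. won the match."]
# Note the made-up-acronym-at-end-of-sentence case still defeats this sketch.
```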

Why sentences, though? To decide what makes up a ‘detail’ in a news story, we scoured thousands of articles to see how journalists present information. A detail is an event or something that was said. Ideally it contains context: who said the thing, to whom it was said, where, and why. The typical vehicle for an entire detail like this is a sentence. Sometimes the context is added in adjacent sentences, forming a paragraph. We were faced with a choice: should sentences or paragraphs be the ‘atom’ when considering details? We went with sentences for a simple reason: the majority of paragraphs we researched contained more than one detail across their sentences. If we analyzed details at the paragraph level, we’d end up with all kinds of strange behavior, since the semantic meaning of each detail would be mixed.

So, sentences it is! We now have a collection of them for every article in the story. Next we cluster them together across the sources in order to find consensus on their semantic meaning. That’s a mouthful, so what does it mean? 

In a news story the different sources are all reporting on the same thing, some might have fewer or more details than others but there will be a lot of commonality. We want to find all the details that have several sources covering them, that’s the clustering and consensus part. Additionally, we want to find these clusters regardless of the exact wording each source chose for its sentence. For example, let’s pretend there’s a story covering a new scientific paper on the effect of a Nutella-only diet. One detail may be that participants reported a marked increase in happiness in their daily lives. One source may write “survey respondents consistently showed an improvement in happiness” while another source may write “participants demonstrated a 10-20% increase in happiness when surveyed”. Despite the difference in words these are the same thing and we want to capture that. That’s the semantic part.
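To make the idea concrete, here’s a toy sketch of greedy semantic clustering over sentence embeddings. It assumes the embeddings come from some sentence-encoder model, and the 0.75 threshold is illustrative rather than our production configuration:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_sentences(embeddings: list, threshold: float = 0.75) -> list:
    """Greedily cluster sentence embeddings by cosine similarity to a centroid.

    Each cluster is a list of sentence indices; sentences from different
    sources that land in the same cluster express the same detail.
    """
    clusters = []
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            centroid = np.mean([embeddings[j] for j in cluster], axis=0)
            if cosine(emb, centroid) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```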

How we actually do this is horrendously technical and will be saved for our next post covering all those nitty-gritty details (pun intended). The level of consensus we need in order to call a cluster of sentences a detail depends on how many articles we have. Not every story is as earth-shattering as the Nutella diet one; some only get covered by a handful of sources. When we have fewer than 10 articles, we only need 2 articles to form consensus on a matching detail. Up to 50 articles, that threshold increases to 7 articles containing a shared detail. Beyond 50, we require at least 15 articles to present a detail for it to be considered.
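Those thresholds translate directly into a small helper like this, a plain restatement of the numbers above:

```python
def consensus_threshold(num_articles: int) -> int:
    """Minimum number of articles that must share a detail for it to count."""
    if num_articles < 10:
        return 2
    if num_articles <= 50:
        return 7
    return 15

# Keep only clusters large enough to count as details, e.g.:
# details = [c for c in clusters if len(c) >= consensus_threshold(len(articles))]
```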

There’s no side-stepping that our choice of consensus levels is subjective. Every month we revisit these numbers and try to do better; what we have so far was chosen through trial and error with typical news stories.

You might be asking: what about that one source which has something special the others didn’t cover? Unfortunately, that will be left out of our analysis. There is no way for us to verify whether that detail is valid or relevant to the story. During a breaking story this might cause us to miss things; however, after just one hour of a story’s life we usually have enough coverage to form consensus, since sources tend to copy each other.

There’s more to how we form consensus, though. Here’s something fun a clever news conglomerate could do. Let’s say our conglomerate (we’ll call it Shoes Corp) has several dozen publishers in its organization. Shoes Corp could instruct each of these publishers to write the same superfluous details in order to trick our analysis software into thinking they’ve covered some special detail. This would lead to these organizations receiving a higher score than others (more on that to come) and would unfairly favor Shoes Corp. To combat this twisted gamification, we adjust the scoring weight of each detail based on the number of unique sources that covered it. We maintain a database of correlated sources to do this.
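Conceptually, the adjustment looks like the sketch below, where a simple parent-organization map stands in for our correlated-sources database (the publisher names are made up):

```python
# Hypothetical map from publisher to parent organization, standing in for
# our correlated-sources database.
PARENT_ORG = {
    "Shoes Daily": "Shoes Corp",
    "Shoes Tribune": "Shoes Corp",
    "Independent Gazette": "Independent Gazette",
}

def detail_weight(covering_sources: list) -> int:
    """Weight a detail by the number of *unique* organizations covering it."""
    return len({PARENT_ORG.get(src, src) for src in covering_sources})

# detail_weight(["Shoes Daily", "Shoes Tribune"])        -> 1
# detail_weight(["Shoes Daily", "Independent Gazette"])  -> 2
```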

At the end of this process we have the text content from every article and all of the details we found in the whole story. Now we go through each article and look for the details it did not contain. For each of these we then try to find a sentence within the article that is at least somewhat related to the missing detail. With that sentence we can highlight the right place in the app and give the reader somewhere to find the missing information with the right context. This is tricky, since we’re trying to connect the missing piece to something that might have nothing close to it in the article at all. Despite this, we get it right most of the time, and we’re always working to improve this feature in particular.
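Assuming we already have embeddings for the missing detail and for each of the article’s sentences, that matching step can be sketched as a best-similarity search with a floor (the 0.4 floor is illustrative):

```python
import numpy as np

def best_anchor(detail_emb, sentence_embs, min_similarity=0.4):
    """Index of the article sentence most related to a missing detail.

    Returns None when nothing in the article is even loosely related.
    """
    if not sentence_embs:
        return None

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = [cosine(detail_emb, s) for s in sentence_embs]
    best = int(np.argmax(scores))
    return best if scores[best] >= min_similarity else None
```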

How do we find misleading text?

Next we look for potentially misleading pieces of text in each article. This can be a slippery slope, the bottom of which terminates in a sheer cliff: one person’s idea of misleading text might not be the same as another’s. Much discussion at The Daily Edit centers on this point, but ultimately our rule is to never consider anything misleading unless it can be shown objectively by the actual text we highlight.

This means that we do not highlight hyperboles, false dichotomies, or straw man arguments. Instead, we highlight things like missing data references (“a recent study shows” – without a reference to the study), missing sources (“according to an anonymous source”), and scare quotes. Each of these can be verified by the reader just by looking at the text we highlighted. Either the data was referenced or it was not. Either the source was named or it was not (and it’s fine not to name a source; we just want to increase awareness).

We achieve this by matching grammatical patterns against each sentence. Each time we find a match, we add it to a scratchpad for further review. Later in the article we might find another piece of text that clears a previous match. For example, an article might include a data reference in its first paragraph but only mention where the data came from in the last paragraph. In that case the data reference is valid and we shouldn’t highlight it.
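As a rough illustration only, here’s the scratchpad-and-clear idea expressed with ordinary regular expressions. As described below, our real patterns are written in a dedicated pattern language, not regexes:

```python
import re

# Illustrative regexes only; the production patterns are written in a
# dedicated pattern language, not regular expressions.
DATA_REFERENCE = re.compile(r"\b(a recent study|research shows|a new report)\b", re.I)
DATA_CITATION = re.compile(r"\b(published in|conducted by|the study, led by)\b", re.I)

def find_unreferenced_data(sentences: list) -> list:
    """Flag data references unless the article later names where the data came from."""
    scratchpad = [i for i, s in enumerate(sentences) if DATA_REFERENCE.search(s)]
    # A citation anywhere in the article clears the earlier matches.
    if any(DATA_CITATION.search(s) for s in sentences):
        return []
    return scratchpad
```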

Making this happen led us to create a small programming language that lets us describe grammatical patterns of considerable complexity in a very concise way. It supports 60 languages so far! Since it’s a complex tool in its own right, we’ll leave its description for a post of its own.

Alas, we do not always get this perfect. Languages are tricky and there are myriad ways to construct a sentence, so from time to time we’ll highlight something erroneously. If that happens, please report it in the app so we can improve things further.

How does this lead to a score?

OK, so now we have all the articles, details, missing details and misleading text. The poor computer is exhausted and just wants to go home and sleep. However, it has just one more thing to do before it can clock off. It has to produce a score for the reader to compare sources. 

Heads-up: this is the most subjective part of what we do. Please send us any and all feedback you may have so that we can make something that works for everybody.

Article scores are made up of three components:

  1. The coverage score – this is the percentage of all details found in the story set that a particular source covered. More is better.
  2. The misleading score – this is a percentage derived from the number of potentially misleading pieces of text we found in an article. More is worse.
  3. The trust index – this is just a simple arithmetic combination of the above two scores.

We compute the coverage score by assigning a ‘weight’ to each detail we found: the number of unique sources that cover that detail. As in the Shoes Corp example above, their publishers would all count as a single source when calculating the weight. We add up all the weights to get the maximum possible coverage score, then compute each source’s coverage score by dividing the weighted details it contained by that maximum. So if you see a source with a coverage score of 100%, it did a bang-up job of covering the story; give that journalist a Pulitzer.
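Putting the weighting and the division together, the coverage computation looks roughly like this sketch (variable names are illustrative):

```python
def coverage_score(article_details: set, detail_weights: dict) -> float:
    """Weighted fraction of the story's details this article covered.

    `detail_weights` maps each detail in the story to its unique-source
    weight (see the detail_weight sketch above).
    """
    max_score = sum(detail_weights.values())
    covered = sum(w for d, w in detail_weights.items() if d in article_details)
    return covered / max_score if max_score else 0.0
```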

The misleading score is much simpler. Each highlighted region of potentially misleading text adds 20% to the misleading score, with a maximum penalty of 100%. This means that five highlights gives a source the worst possible misleading score. That sounds bad, but most journalists are pretty good, so it’s rare to see more than 40%.

Now we come to the trust index. Choosing how to combine the two previous scores is an ongoing discussion and has seen several iterations so far. One question tends to drive it, however:

What’s better, an article that covers the whole story but is a little misleading or an article that is pristine but misses a few details?

Over time we’ve settled on favoring articles with more coverage, since more coverage tends to lead to a more balanced view of the story. If they include a couple of scare quotes, then so be it; the reader is still better off than seeing only half the story, and we highlight those scare quotes in the article so they can make their own informed judgment. Based on this, the computation is very simple: the coverage score makes up 80% of the trust index and the misleading score determines the remaining 20%. If an article covers every detail and has no misleading text, it gets a perfect trust index. If it has five or more misleading pieces of text but perfect detail coverage, it gets 80% (since the full 20% is lost to misleading text). And so on.
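All three components fit in a few lines. This sketch restates the 20%-per-highlight rule and the 80/20 combination exactly as described above:

```python
def misleading_score(num_highlights: int) -> float:
    """Each misleading highlight costs 20%, capped at 100%."""
    return min(num_highlights * 0.20, 1.0)

def trust_index(coverage: float, misleading: float) -> float:
    """Coverage contributes 80% of the index; the misleading score the rest."""
    return 0.80 * coverage + 0.20 * (1.0 - misleading)

# Perfect coverage, no misleading text:  trust_index(1.0, 0.0) -> 1.0
# Perfect coverage, five highlights:     trust_index(1.0, 1.0) -> 0.8
```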

What about machine learning bias?

Much of what I’ve covered so far depends on the output of machine learning models, and no discussion of these is complete without covering bias. We’ve all read about machine learning bias ruining models (Google did it, so did Facebook), so how does it apply here?

Our models are trained on very large corpora of text, selected to give the broadest possible coverage of news stories in the wild. Despite this, the inherent structure of news stories can let bias seep into the model.

For example, what about an entirely new technology covered in an article with a truly eccentric style of writing? Nothing like it would have been encountered during training. In our application, the worst this leads to is a source’s sentence not making it into a cluster. This causes us to show that detail as ‘more detail’ on that source’s article in the app, despite it already being there in some funky form.

This has two effects. First, we waste a few seconds of the reader’s time by showing them a detail they can already see. Second, we penalize the article and give it a lower score than it would otherwise have had. This isn’t optimal, and it’s not easy for us to know when it happens, so if you encounter this case in the wild please let us know so that we can improve our models’ training data in the future.

Compared to the two examples linked above, however, the effects of bias in our machine learning models pose little risk; they are a minor annoyance more than anything else.

Conclusion

Whew! Almost 3,000 words and here we are, finally, at the end. In an industry as thorny as news there is no trust without transparency, so we hope this post has shown you at least some of the lengths we go to at The Daily Edit to give you a better news reading experience and more media insight.

This will forever be a work in progress, as the news itself keeps changing, so please send any feedback and questions you might have. We’re always open to discussion and debate on any topic.

Over the coming weeks we’ll publish more posts explaining each of these components with all the technical detail.

How the media’s mission has evolved, and why media outlets are less trusted

Recording stories and sharing information dates back to the beginning of humankind, with ancient cave paintings, maps, and carvings. Yet today, information travels immensely faster thanks to new technologies and a demand for instant gratification. From the printing press to telegrams to TikTok, the innovations enabling information sharing have evolved rapidly.

Mass communication – exchanging information on a large scale – is not a new concept, but in today’s world absolutely anyone can produce and consume content. This means that not all mass media is trustworthy.

But it hasn’t always been this way.

News anchors: from “most trusted man in America” to evening entertainment

In the 1960s, Walter Cronkite, the anchorman of the CBS Evening News, was named the ‘most trusted man in America’ in national polling. With his famous sign-off “and that’s the way it is”, he was known for his mission to provide unbiased, factual news that he felt people simply needed to know. At that time, news media was viewed as a public service.

But in 2022, a dismally low proportion of people – just 26% – say they trust the mainstream media. In addition, media outlets have become more divisive in their storytelling than ever. So how does mass media go from being one of the most trusted forms of information to a divisive, biased, and untrusted platform?

The answer lies in following the money.

In Cronkite’s day, networks made their money from entertainment programming, and news programming often ran at a loss. At that time, showing the evening news in an unbiased manner was a requirement for television stations to be allowed to broadcast their profit-making programs. But that all changed in the late 1970s and early 1980s, as network executives started to see dollar signs around news programs, and in the booming 1980s the chase for those dollars accelerated.

The turning point was an incident in 1979 that opened network executives’ eyes to the earning potential of the evening news. On November 4, 1979, student militants stormed the American embassy in Tehran and took dozens of Americans hostage, 52 of whom would be held for over a year. ABC News’s nightly broadcasts on the Iran hostage crisis saw Americans glued to their screens every evening. For the first time, the news was drawing in more eyeballs and interest than entertainment programming. ABC News made the show permanent, renaming it Nightline, and other networks soon followed suit with shows of their own. Nightline continues to this day, but the stories it covers have evolved; nowadays they range from regular nighttime news to the lives of celebrities.

Advertising buys in

The next big change in the evolution of news media was a growing dependency on advertising. With more viewership, evening news programs became attractive advertising platforms. Slowly but surely, advertisers set a new normal: buying not only airtime but also influence over the stories being told.

Advertiser agenda-setting has been going on in media for decades. With media outlets dependent on advertising budgets, advertisers over time placed pressure on editors to avoid stories that would conflict with their interests. For instance, in the early 1970s, when The Daily Iowan started to share anti-Vietnam War content, sponsors withdrew funding, leading to internal pressure to change the tone.

By 1992, 90% of editors at daily newspapers had experienced economic pressure from advertisers around agenda-setting, and over one-third of them had given in. The trend continued: in 2000, 30% of journalists admitted to softening or avoiding stories that could negatively affect advertisers, and a 2021 study found that most editors experience significant pressure from advertisers over what goes on in the newsroom.

This problem is not going away anytime soon. As of 2022, nearly 70% of domestic news revenue came from advertising. There have even been several recent cases of advertisers stepping in to control news stories on topics like COVID-19, fossil fuels, and marijuana.

The threat of advertisers withdrawing their support has left media outlets struggling more than ever before. So, unfortunately, following advertiser direction has become necessary for outlets to survive – and in many cases, for journalists to keep their jobs.

Changing regulations allows for more media bias 

Besides money, another big change occurred in the 1980s that allowed broadcasters to become more divisive: the elimination of the Fairness Doctrine. A 1949 FCC rule requiring broadcasters to present balanced, opposing views on controversial issues, the Fairness Doctrine acted as a check on media bias. The FCC overturned it in 1987, clearing the way for partisan and polarized stories that were better for business than for public service.

But even if the Fairness Doctrine were in place today, it was designed only for broadcast media; applying it to social media or other mediums would be another challenge, if not a matter of entirely new regulation.

Technological innovation and information overload

Humans have always relied on communication, but the form of that communication has changed drastically over time. Written news has been around for centuries, dating back to the handwritten news sheets of ancient Rome, and the advent of the printing press around 1440 vastly expanded the reach of the printed word.

In the last two hundred years, we’ve seen more technology developed to advance mass communication than in the two thousand years before. The telegraph, which arguably did more to shorten the time human communication takes than any invention prior, traces back to 1774, when Georges-Louis Le Sage demonstrated an early electric telegraph.

Furthermore, digital innovations transformed mass communication permanently. Personal computers emerged in the early 1970s, soon followed by the development of email in 1972, and the internet officially arrived in 1983 with the adoption of TCP/IP.

The rise of social media in the mid-2000s gave us Facebook, YouTube, Twitter, and Instagram. Nowadays, anyone can post anything online. Technology is involved in every aspect of our lives; from smart home speakers to our cars to our phones, we are constantly surrounded by it. Media consumers have grown accustomed to immediate gratification, and they expect news stories almost instantly.

While this is not inherently bad (it’s truly fantastic that we have access to more information at our fingertips than ever before), it makes it much more difficult to sort through the news and to know whether sources can be trusted. Plus, studies show that most people overestimate their information literacy – the ability to find and evaluate information and sources – which means they may be trusting media sources when they shouldn’t be.

However, the solution isn’t to remove technology from the media; it is already too intertwined. The solution is to use technology to enable more transparency in mass media and give unique insights to media consumers. 

So, what now? 

The media is supposed to arm us with the information we need to make decisions about the world around us. But even the media can be bought, and regulation does not currently require that media outlets present unbiased facts. Promoting diversity and unbiased coverage is not censorship; it’s how the news should be.

That leaves consumers of media in a Catch-22. On one hand, you need to know what’s going on in the world. On the other hand, consuming biased content will not get you the full picture. Plus, the constant bombardment of information is difficult to sort through. 

That’s where The Daily Edit comes in. The Daily Edit is a platform powered by machine learning that compares world news to expose bias and misinformation. Only The Daily Edit app can empower users with confidence that every story they read, hear, or watch is the full story, leveling the playing field against the forces of misinformation and disinformation. The Daily Edit is a technological solution to a socio-technical problem, exposing the media through transparency scores that give consumers real-time media insight.

Featured image by the Austrian National Library

Do you ever think the media is lying to you? Are you mistrustful of the news? If so, you’re not alone. According to the 2021 Edelman Trust Barometer, less than half of Americans trust the mainstream media.

While you cannot control the transparency of the news, you can control your media literacy – your ability to critically analyze the news – which will help you become better informed. So, with more news available today than ever before, how do you know if it’s legitimate?

Sometimes, it’s less about the volume of news you consume and more about the quality of news you consume. Learning how to spot media bias and fake news can help build your media literacy, making you better informed. Here are a few tips to get started improving your media literacy. 

What is Fake News?

Fake news is misleading information that is passed off as legitimate truth. It may be an intentionally false story, or the storyteller may take legitimate news out of context, exaggerate certain elements of the story, or give an inaccurate telling of the full story.

What is Media Bias?

Media bias is partiality in a story, whether intentional or unintentional, introduced by journalists or other storytellers. There are several types of media bias, including omitting facts, selecting partial or incomplete sources, choosing where and when a story is shared, and spinning a story to make a certain perspective appear better. Regardless of the intent behind it, media bias makes it difficult to get the full truth behind a story.

Ways to Tell if a Story is Fake News or Biased

Missing parts of a story or encountering media bias affects the way we think about the world around us and current events. Without the full picture, it’s difficult to make a fully informed decision. However, spotting media bias and fake news is easier said than done.

Here are a few questions to ask when spotting fake news:

  1. Who published the story, and is the publisher legitimate?
  2. Is the story being reported consistently by other outlets?
  3. Has anything been taken out of context or exaggerated?

Also, a few questions to ask when spotting media bias:

  1. Are any relevant facts omitted from the story?
  2. Are the cited sources complete and balanced, or one-sided?
  3. Is the story framed to make one perspective appear better than another?

But if you want to be sure you can spot media bias and fake news, try a technology designed to be free of bias (only transparent technology, free of human emotion, can fully remove bias). That’s where The Daily Edit comes in.

Fight Media Bias with Technology 

The Daily Edit is an algorithm-based, comparative news information platform powered by machine learning that measures the world’s stories – so you always have the full story. Get real-time media insight into current stories, with full transparency about the news.

Our proprietary technology processes media output from around the world, evaluating every detail for integrity and bias. Then we score media based on our “Trust Index”, making every story a data story.

Featured image by John Schnobrich