This week we take a look at one of the tastier fallacies - Having Your Cake. Have you ever encountered someone who seemed like they were playing both sides but couldn’t figure out what was up? Then you’re in the right place. Oh and somehow we get a little camping in too.
Let’s face facts, shall we? There’s a whole lotta stuff I don’t know. Why is ice slippery? Why do we drive on a parkway but park on a driveway? How long should I wait after eating to go swimming? But nothing vexes me more than why in a peanut butter and jelly sandwich the piece of bread with the jelly on it goes on top.
A variety of polls show that the jelly should go on top. However, an exhaustive search (i.e., the first page of Google) turned up only some vague ideas. Something about heavier ingredients or taste sensations on the bottom, but nothing definitive. Science is oddly silent on this item, and yet these are the same people who figured out how bumblebees fly. So why are there no in-depth studies on this, or even an undergrad study project? When presented with this type of problem, there is only one possible solution - Aliens.
It’s the only answer that makes any sense at all. Aliens, who we all know built the pyramids, also left us the PB&J and they designed it so that the jelly goes on top. The reason no scientists will study this is that to look into it would be to confirm once and for all that aliens exist. And they just aren’t allowed to do that. Big Jelly has total control and the aliens running it want to stay hidden. See, this is the only explanation that works.
Hopefully you thought about that and went “Um what?”. Yeah, I stepped in all sorts of logic problem buckets. Let’s take a look at them.
I’ll start with basing a conclusion, aliens, on the fact that I don’t understand why jelly is mostly top-bread based. This is a personal incredulity fallacy (a close cousin of the argument from ignorance). It basically means a conclusion is drawn from a lack of evidence, or from not understanding the evidence that exists. As weird as this fallacy seems, it shows up far too often. The universe isn’t expanding because I’m not getting bigger (not understanding that universal expansion happens in space over large scales, not locally). Then there’s the leap I made – aliens must exist because there is no evidence for why jelly goes on top. This ignores lots of perfectly rational explanations, such as: the data is bad and jelly isn’t on top more often than the bottom, maybe it’s based on ingredient flavor and position when eating, perhaps it’s just a cultural phenomenon, etc.
Wow, what else did I do? Well, I also jumped to a conclusion and in the process stumbled into a black and white fallacy. In essence, I said either jelly physics has a reasonable explanation that I would understand or aliens did it. I left no room for anything in the middle, or even the chance that new information might show up in the future. Since you can’t prove it wasn’t aliens, I also made this a non-falsifiable argument. I claimed to be right because it’s impossible for you to prove me wrong, when in fact it should be up to me to prove aliens are real and controlling vital PB&J technology. That’s 3 logical errors in a row – I’m really on a roll!
For good measure I threw in some conspiratorial thinking. When there is a non-falsifiable claim, it’s nice to also have a conspiracy on your side in case anyone just happens to falsify it. Then you can just say “they were in on it” or “someone must have gotten to them”. Assuming a conspiracy exists can happen when we don’t see that our logic was faulty and we just weren’t thinking clearly enough. If we recognize that our logic was faulty, we can make ourselves more open to seeing other possibilities and will likely find there is no need for a conspiracy.
It’s entirely possible I made other logical errors, but these are the primary ones, and examining them turns my claim into nonsense. Here’s the important point of critical thinking I want to make sure is regularly mentioned – I don’t have to feel bad or ashamed because I held an incorrect view or opinion. Instead, I’m happy to recognize these issues and work towards a better understanding of the subject. Too often we’re told making a mistake is bad and we should never admit to it. That stops us from growing – learning through mistakes is a vital part of our education.
People keep telling me I drink way too much coffee. They say I really should drink less than 12 cups a day (just for the record, this is for illustrative purposes only!). But I’m smart and informed and ready with a comeback. I know that “less” applies to uncountable quantities like mass or volume, and what they really mean is “fewer”, which applies to things you can count. I can confidently respond “Aha, since you said less, then you are wrong and my coffee habits are fine! Good day sir.”
Of course, that makes me a jerk (again, for the record, I don’t actually do that, illustrating and all). It also makes me a pedant – someone engaging in pedantry. This is when a person is overly concerned with minor details and rules, and it shows up in language quite a bit. That grocery store sign that says “12 items or less” should really be “12 items or fewer”. We all understand what is meant and get on with our shopping, though; we don’t need to know the difference.
I will argue that knowing there is a difference is, however, useful. In a grocery store context less and fewer are easily interchanged. If you are reading a scientific article the author might be using these terms with their exact meanings implied. These terms can show up in dieting – less than 12 grams of sugar per day and fewer than 3 servings of ice cream. By understanding the differences we can more easily see meaning and nuance. If we run across them used incorrectly we don’t have to get bent out of shape.
Another fun but useful bit of pedantry is the difference between the terms venom and poison. Venom is a Spiderman villain and Poison is an 80s rock group. No, wait, that’s not it. Venom and poison are actually the same kind of thing but differ by their method of delivery. If the substance is injected, then it is venom, and that’s how we get venomous critters – they bite or sting to inject the venom. Poisons, on the other hand, are something that we ingest (or inhale). So, by these rules, if I extract the venom from a snake, drink it and it harms me, I poisoned myself (I don’t recommend doing this). What if I extract the venom, put it into a syringe, and inject it into myself? I guess then it’s still venom. The important point is that there is some kind of toxin in my body that probably isn’t good and I should see a doctor.
Drat, I said toxin up there, didn’t I? This is a tough one, especially these days. It’s a word that normally has a pretty broad meaning but has now become a thing unto itself. We all need to quit taking in toxins and we need to get the toxins out. Here’s the confusing bit: “toxins” aren’t a single thing. Everything is toxic to us; it just depends on the amount. Oxygen? Yep, too much will kill you. Water? Yep, that one too. Vitamin C – you’re gonna need a lot, but it does get toxic at some point. Other items are simpler to determine: hydrochloric acid is bad even though we all have some in our stomachs anyway, alcohol may feel good at the start but too much is bad, radioactive substances are generally not good, and Tide Pods, we know, aren’t good at all. When we hear mention of toxins in our bodies, we should immediately ask what substance it is that we have too much of, because “toxin” is more of a catch-all term.
Understanding the differences in words is not of itself a bad thing. It’s ok if someone on the hiking trail calls out a snake as poisonous; we don’t really need to correct that. If we’re talking with a physician, herpetologist or poison control center, then we may want to try and use these terms with their more exact meanings. Knowing when to look for and use the more detailed meanings of words is the useful bit of knowledge to keep in mind. The real kicker here will come when someone tells me I didn’t define pedantry accurately enough.
I recently completed a first-of-its-kind, revolutionary, groundbreaking, extraordinary study to once and for all determine exactly what the best kind of cookie really is. And the winner is … chocolate chip! Yes, that’s right, the humble chocolate chip cookie is the hands-down favorite. Let us all rejoice and rest easy now that we’ve finally put that great mystery to bed.
If your first question when reading this is “Where can I get some cookies?” well then, you’re right. But you should also ask “Hey, how did you conduct this study?”. Likely you’ll get an answer that falls into one of three types. Which type you get could change how much trust you put in the results of a survey or study.
It’s entirely possible I will tell you that the methodology used in my study is confidential and a trade secret. If I were to tell you, you could do your own studies and thereby take money out of my pocket, and then I couldn’t buy cookies anymore. Finding out the methodology for a study is somehow secret doesn’t automatically make the study suspect, but it should set off some loud warning klaxons. When you know that you can’t review the methods used, it means that you have to take the provider’s results pretty much on faith. Maybe the operator of the study is trustworthy and accurate and this is fine. However, this tactic is also used to hide problems or lead recipients into a desired conclusion. It’s also possible the study was flawed or perhaps wasn’t even done in the first place. Since you can’t see it, you can’t tell.
I could be totally cool and give you all the details of the study. In this case, you’ll know that I used 24 types of cookies, sourced from award-winning bakeries that used only top ingredients, and each cookie was made within 24 hours of eating. The participants were a group of 61 people of diverse cultures, genders and age ranges. Oh, and it was in space. Like outer space. You are perfectly free to repeat my study, but it was conducted in such a way that it is difficult if not impossible to ever repeat the exact conditions. Since you know all the details, perhaps you can trust the results. But since you know the study isn’t reproducible, it is also fine to consider the results a bit suspect. Perhaps there were reasons the study was designed to make reproducing it difficult.
Finally, I could tell you that I purchased 12 packages of common brand-name cookies in differing varieties (and provide the details of each), assembled a group of 25 randomly selected, diverse people (again, giving the details of the group like gender, age, ethnicity), and asked them to sample each cookie and rank their top 5 in order. Then I picked the most often occurring one. This is great: you have everything you need if you want to conduct this study, and it’s entirely possible to do so. This is what we like to hear. However, what happens far too often is you find out there was only one study done. No one else did this to see if the results were the same. When multiple people do the study and get the same result, its trustworthiness goes up. Since there was just the one study, how much do you trust it? If you are planning on starting a cookie business and investing your life’s savings on this one result, you might want to rethink that decision. Wondering what cookie to buy this week at the store? Maybe you’re OK.
We see a lot of news that starts with something like “Latest study shows …” but we’re not often told much about the studies themselves. Some could fall into the first category, where the method is purposely hidden. You might see this in cases where someone purports to have defied gravity or created a perpetual motion machine – only you can’t see how it was done. Be wary. While my second type seems a bit fantastical, it’s pretty common in the science world simply because some things are hard to do. A number of studies are literally done in space, so doing them again is extremely cost prohibitive. Or they require very specific conditions and knowledge. This doesn’t immediately invalidate the results, but it does mean we should treat them as not fully verified, or even as likely to change in the future. The last type is also very common, and very pervasive. Quite a number of studies were done once, got a result, and were never repeated. There was no nefarious reason for not repeating them; it just never happened. We should certainly be aware of these so we can more accurately make decisions using their results. A number of psychological studies have wound up here, and many of their results are in common use.
The moral of the story here is that you might need to study the studies. For now, I need some cookies.
It’s spring when I’m writing this and I’ve been out getting to the lawn work, cleaning up the property and taking the time to see everything bloom. The few fruit trees we have are certainly flowering, and already my mind is wandering forward to picking and processing. While stopping to smell the roses I was reminded of an odd study that claimed a diet of just tree bark was effective for weight loss. The study purported to show that eating a diet of mainly tree bark helped participants shed the pounds.
That sounded a bit weird, so further digging was required. It turns out that this study was done on 25 people – a small group, but this is an odd bit of research so we’ll allow it. Participants were asked to make tree bark at least 65% of their intake every day for 4 weeks. They could prepare it however they wanted – straight crunching, puree, paste, boiled, mashed, fried, anything. Of the 25 people, 2 actually did lose significant weight, and that’s what was reported. But what about the other 23, you ask? Of those, 12 reported no change, 4 actually gained weight (maybe they did the fried bark), 5 became extremely ill and had to quit (you really shouldn’t eat tree bark!), and the last two started mumbling incoherently about Groot. This is a classic example of cherry picking – the study found a few examples that supported its claim of effectiveness and then either ignored or under-reported the rest.
Doing all that research made me hungry, so it was time to find some pizza. It is the accepted post-research food! A quick check told me the two styles are Chicago Deep Dish and New York. Since I’m in the Northeast US, I have to like New York style. Even thinking about a Chicago Deep Dish is grounds for incarceration. But wait, I thought, this is pizza, ubiquitous and delicious – don’t I have more than 2 options? I could go for a stuffed crust, which to me seems like deep dish around the edge with NY in the middle. Or tomato pie, also a specialty here in the Northeast US, that’s just bread with a cold tomato sauce topping, which is sort of deep dish, I think? Meat lovers 4-cheese thin crust – sure, it’s NY style, but there are so many toppings it’s pretty thick. I suddenly realized I had a false dichotomy on my hands. I was told there were only two options, when in fact there were many. So, I had fried chicken.
After that little snack it was time to get back to the yard work. Since I live on a farm, sometimes people stop to ask animal and farming type questions. Today I was asked if I had any plans to raise beefalids, to which I responded “uh ….”. I said this sounds like a cross between a cow and a worm and I’m pretty sure it doesn’t exist. The response I got was “Sure they do”. So, I asked to see one. At this point the person said it wasn’t their job to show me one, but since I’m the one who doesn’t believe they are real, then I have to prove they don’t exist. Darn that proving non-existence fallacy! You see, it’s generally very difficult, if not impossible, to prove that something doesn’t exist. Instead, the burden of proof should always be on the one claiming existence. I don’t like to see someone leave angry, but this person headed off in a huff muttering something about stopping for a worm burger.
These are examples of fallacious arguments. They exist everywhere, and they aren’t always used to deceive; sometimes we don’t even know we’re using them. Such an argument doesn’t necessarily mean an incorrect conclusion, either. It’s quite possible that beefalids do exist, but someone must prove it first as opposed to making me disprove it. In too many cases, though, someone might be looking to trick you, and it’s good to look at their arguments and evidence. Studies are especially prone to cherry picking, where the confirming data is reported and the rest is ignored or downplayed. The current US political climate is heavy with false dichotomies, telling us we have only two choices, when in fact things are much more nuanced and grey. Proving non-existence may seem silly, but it’s all too common, especially when the goal is to refute an established fact – I sleep on a pillow full of oregano to improve my complexion and you can’t prove it doesn’t help me, so therefore it does.
With a finite set of topics I could discuss, it’s inevitable that I’d eventually have to talk about probability, isn’t it? As I write more articles that don’t feature probability, you have to think that the next one will feature it. The longer I go not writing about probability, the more I’m due, obviously.
Are you sure about that? Is there anything else going on here that could lead us to the wrong conclusion? Probably.
We’re going to have to go back to the roots of probability. Of course, I’m talking about toast. Let’s assume that we have a piece of fair toast with a burned image of Stephen Hawking on one side. A piece of fair toast is one that if tossed in the air has an equal chance to land on either side. I toss that piece of toast and it lands Stephen Hawking down. I toss it again and it’s Hawking down again. One more toss and the late great professor is still staring at the floor. Surely on the next toss it’ll land with the renowned physicist looking to the heavens. We’re due to see him!
Probabilities are sometimes viewed as hard to understand, but that’s not true. The basics are pretty simple. When we temporarily levitate our bread over and over, each toss is independent of the one before it. That means the results of the first toss do not affect the second, the second does not affect the third, and so on. These are known as independent events. Hawking’s toast, when tossed, has a 50/50 chance of landing physicist up. If we try it 5 times and dear Stephen always stares at the floor, we may feel like there is a greater chance he’ll be face up on the next try. But that’s not the case, as it’s still just 50/50.
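If you’re skeptical, a quick simulation makes the point. This is just an illustrative sketch (the function name and the numbers are mine, not from any real study): flip a fair toast many times and check what happens on the flip right after a run of three Hawking-down results.

```python
import random

def rate_after_streak(trials=100_000, streak_len=3, seed=42):
    """Simulate fair toast flips (True = Hawking up) and measure how often
    the flip right after a run of streak_len Hawking-down results comes up
    Hawking up. Independence says: still about 50%, streak or no streak."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(trials)]
    after_streak = [
        flips[i]
        for i in range(streak_len, trials)
        if not any(flips[i - streak_len:i])  # previous streak_len flips all down
    ]
    return sum(after_streak) / len(after_streak)

print(rate_after_streak())  # hovers right around 0.5 - no one is ever "due"
```

No matter how long the streak you condition on, the rate stays near 50% because each flip simply doesn’t know about the ones before it.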
Our trusty human brains are always trying to make sense out of nonsense, which is very helpful in keeping us alive. Sometimes it can lead us astray though. These independent probabilities tempt us to think of them as dependent. We know the toast can’t always land astrophysicist floorward, and any time we see too many examples like that we start to think the next can’t be the same, that we’re more likely to get a Hawking. This is what we have to be on guard for. Our brains want us to think the independent events are actually dependent.
Lots of things that we think are dependent probabilities aren’t. A sports team with a losing streak is due for a win simply because they’ve lost a bunch. But really, the number of games lost doesn’t affect the next game’s outcome. It’s more likely that player health, caliber of the opponent, venue or other factors would affect the outcome. Weather is another example – it’s been so dry lately, we’re due for some rain. However, the number of dry days does not affect the next day’s weather. I know that weather is extremely complex and we could argue that there is some effect, but it’s so tenuous and difficult that the simpler answer is that one day’s weather does not affect the next. Ask anyone in a long-term drought. A poker player who’s had some bad hands may feel they are due for a good one, but if the deck is shuffled between plays they are no more likely to get a better hand than on the last one.
There are conditional, or dependent, probabilities as well. And we have to go back to toast for this, or more generally, breakfast. Have you ever gone out for breakfast and only had the server bring syrup if you ordered pancakes? If so, then what is the probability you’d get syrup without ordering pancakes? Maybe it was in error, or it’s just standard practice if you want more maple-y bacon. The probability of getting syrup is based on the type of breakfast ordered. Syrup is dependent on pancakes. This doesn’t mean you won’t get syrup if you don’t order pancakes, but that you are less likely to get your distilled tree blood without first ordering height challenged cake.
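The pancake-syrup idea can be seen in miniature with a toy calculation. The breakfast log below is pure invention (the counts are made up just to illustrate), but the arithmetic is exactly what a conditional probability is:

```python
# Hypothetical breakfast log: (ordered_pancakes, got_syrup) - counts made up
orders = ([(True, True)] * 40 + [(True, False)] * 10 +
          [(False, True)] * 5 + [(False, False)] * 45)

def p_syrup_given(pancakes):
    """P(syrup | pancakes status) - the share of matching orders with syrup."""
    relevant = [got_syrup for ordered, got_syrup in orders if ordered == pancakes]
    return sum(relevant) / len(relevant)

print(p_syrup_given(True))   # 0.8 - syrup is very likely with pancakes
print(p_syrup_given(False))  # 0.1 - far less likely without them
```

The syrup probability changes depending on what was ordered, which is exactly what makes it dependent rather than independent.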
Was I really due to talk about probability? No, there is a very large set of topics I could discuss. With the passing of each article, the likelihood of my delving into probability would not change overly much. If you hear a probability brought up, just ask yourself: is it independent or dependent? If independent, the odds never change. Only dependent occurrences can affect the odds. With each toast flip we may wish we could see more Hawking, but alas, we’re no more likely than the last time.
Here’s an oddity I ran across that I bet you didn’t know – people on average will do 3 silly walks a day. A silly walk is defined as any walk other than the standard one foot in front of the other, such as: skipping, hopping, galloping, shuffling, moonwalking, sliding, crab walking, levitating, etc. Pretty weird, right, that we do this? I mean, who would have thought.
Speaking of mean, that brings up a question. What does average even mean? Well, doing some research we find that it’s just the sum of the values divided by the number of values. In our silly walking case we have some number of people asked and how many ambulatory variations in total they performed. Sounds pretty simple to do.
Our study included six participants (I know, kind of a small sample set, but this is just for demonstration). Five of those asked performed zero silly walks (wait, what?). The sixth person, who it turns out is a huge Monty Python fan, does an astounding 18 silly walks per day. If we do the quick math, the total number of silly walks is 18 (just the one person, since the rest did zero) divided by the 6 people, which gives us 3. You should be thinking, um, wait, only one person did any walks at all, yet the average person supposedly does 3. Something doesn’t seem right.
You are totally correct. Averages are simple, yet they hide a dangerous flaw. They like evenly distributed data. Now, I’m trying to keep these articles math and statistics lite, so we’ll keep the definition of distribution pretty simple. In evenly distributed numbers, there isn’t a lot of variation from one number to the next when they are sorted (stats people, I know this is super simplified). Our data would look like: 0,0,0,0,0,18. Everything is fine until that last value, when we jump from 0 to 18 – it’s a big change. That’s what’s called an outlier – a value that lies outside the range of the rest of the numbers. Sets of data like this can really mess with the average and render it pretty meaningless. Obviously, most people do no silly walks.
If the data were better, we might have seen it look like this: 1,2,2,4,4,5. There are still six observations and they still total 18. So, while no one asked performed exactly 3 silly walks per day, 3 is the average. We can see that there is no large change from one number to the next in the ordered set, so they are pretty evenly distributed. In this case, the average is probably much more trustworthy.
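Both datasets are small enough to check by hand, but here is the same arithmetic in a couple of lines, using the numbers straight from the examples above:

```python
skewed = [0, 0, 0, 0, 0, 18]  # one Monty Python superfan skews everything
even = [1, 2, 2, 4, 4, 5]     # same total of 18, far more evenly spread

print(sum(skewed) / len(skewed))  # 3.0 - yet nobody here actually does 3 walks
print(sum(even) / len(even))      # 3.0 - same average, much more representative
```

Identical averages, wildly different stories, which is exactly why the underlying distribution matters.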
The important question to ask when you see an average, then, is what the underlying numbers looked like. Were they evenly distributed? Were there any outliers? If there were outliers, were they removed (sometimes it makes sense to remove them, other times they are important, but that’s maybe another article)? Averages surround us. If you start looking you will likely see them everywhere. On average, how many do you see a day? The danger is that you probably know nothing about the underlying data, and you can’t assume the person making the number did either. Averages are easy to abuse to make something come out the way you want, even if it’s not correct, just like I did with the silly walks data.
Averages are just one method for measuring the central tendency of a set of data. You may often hear average also called “mean” – that’s the more accurate name. Re-read those first few paragraphs for some foreshadowing. The other two common measures are the median (the middle value when the data is sorted) and the mode (the value that occurs most often). I give this bit of reference as we’ll probably see them later.
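Python’s standard library happens to have all three measures built in, and applied to the silly walks data, the median and mode expose exactly what the mean hides:

```python
from statistics import mean, median, mode

walks = [0, 0, 0, 0, 0, 18]  # the silly walks data from above

print(mean(walks))    # 3   - the outlier drags the mean up
print(median(walks))  # 0.0 - the middle of the sorted data
print(mode(walks))    # 0   - the most common value
```

When the mean, median and mode disagree this badly, that disagreement itself is a strong hint that the data is unevenly distributed.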
Like most critical thinking exercises, if you can see the underlying data or at least know the sample population and logic used, you might find the number is just fine. If you can’t find the data or no logic is given, there’s that flag to say it might be suspect. This is especially frustrating when you hear averages thrown around on a news program and you have no way to check it. If you find yourself with the chance to question the data, go ahead, on average you might find the answers enlightening.
I’m still a bit new to this whole writing (and maybe educating a bit too, no?) thing. That leads me to look for information on how to write better, get into the flow more quickly, keep my thoughts aligned, engage the reader, etc. And there are a whole host of helpful articles and speeches out there, such as: 5 best socks an aspiring writer should wear, dental habits of successful authors, hat choices to make the most of your next blogging experience, and so on.
What does reading articles like these make me do? If you said “think”, then you’d be right. If you said “eat chocolate chip cookies”, well, you’re also right. I might find out that 80% of bloggers that were successful always wrote while wearing a beret. What none of those pieces ever tell me is what 80% of the unsuccessful writers wore. It’s entirely possible that they also wore berets. These types of articles never tell us the opposite.
Learning to ask such questions is known as “testing the negative” and it’s a really important concept. Plus, it’s one of the best ways to help avoid the dreaded “confirmation bias”. We inherently like things that agree with what we think. A recent study might show that employees who nap at work are more productive. What? I like naps, why can’t I do this? But what is the flip side of the study? How did those employees who didn’t nap fare? Perhaps they had just average productivity but accomplished the same amount because they weren’t sleeping.
We are inundated with positive-only articles. Why? Probably because we’re supposed to buy something. Or maybe it just wants us to click on it (see episode 1). The trick is to learn to ask yourself what the inverse of the statement is. Did you know that 75% of Fortune 500 CEOs have two cups of coffee every morning (numbers for illustration only)? Well, what do non-Fortune 500 CEOs drink? Could they have two cups of coffee too? Or, how many people who drink two cups of coffee are Fortune 500 CEOs? There are often several different ways to look at a number or the results of a study, but we’re usually just given the one that confirms the statement.
This type of thinking is useful in a wide variety of situations to keep us from blindly doing or agreeing with things. As someone who is aspiring to do a bit of writing, I’ll hear that a good way to get better is simply to get up early and write for at least 1 hour every day. OK, sure, sounds plausible. I’ll bet there are people who did this and got better. But how many people did this and it made no difference, or perhaps even made them worse (maybe stress and schedule actually reduced their creativity)?
You can also learn to counter anecdotal wisdom with this kind of thinking. Putting on my pants left leg first has made me the ink connoisseur that I am. OK, but how many ink connoisseurs put their pants on right leg first? How many don’t wear pants at all?
Testing the negative isn’t just a way to throw out bad information. It can also serve to reinforce something. Say 70% of people who engaged in at least 20 mins of strenuous exercise each day had better overall lung capacity. We could ask what percentage engaged in the activity and saw no increase or even a decrease, and if we found that 25% saw no increase and 5% saw a decrease, then we know the 70% probably isn’t too bad. Exercise is generally good for you. We could also ask how many people who engaged in no strenuous activity each day saw an increase in lung capacity. If that is small, maybe 5%, then we know that the inverse is helping to show the validity of the main claim. Again, these numbers are for illustration only.
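Here is a small sketch of that comparison, using the made-up percentages from above as counts per 100 people in each group (again, illustration only, not real data):

```python
# Made-up counts, per 100 people in each group (illustration only)
exercised = {"better": 70, "same": 25, "worse": 5}
no_exercise = {"better": 5, "same": 85, "worse": 10}

def improvement_rate(group):
    """Share of a group whose lung capacity got better."""
    return group["better"] / sum(group.values())

print(improvement_rate(exercised))    # 0.7  - the headline number
print(improvement_rate(no_exercise))  # 0.05 - the negative we tested
```

Seeing both rates side by side is the whole point: the headline 70% only means something once you know the non-exercisers came in at 5%.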
So, go ahead, be a little more negative. Testing the inverse is a skill and like most it takes practice (how many got better without practice, you should ask). A bit of the negative might just make you more positive.
I was recently told there’s been a resurgence of interest in county fairs. You know the kind: local vendors, tons of awesomely bad-for-you food, a Ferris wheel, some animals, tasting hot sauces and these days wine sampling. There really is something for everyone and I can enjoy the heck out of them (urge for funnel cakes rising). Seems totally reasonable that their popularity is on the rise.
Of course, as a good denizen of the data world, I can’t leave well enough alone and I just have to ask “How do you know there’s a resurgence? Heck, what does resurgence even mean?”. Believe it or not, I got some numbers. As we know, I live in East Kintertownsylvania (home to the world famous smallest ball of yarn!), and there is one fair held every year. In 2017 the attendance of the 3-day fair was 12,295 people. In 2000 the attendance was 8,374. That’s nearly a 47% increase, not bad at all.
Is it the whole story, though? I feel like we’re missing something. What was it? Yes, that’s right: what were the East Kintertownsylvania populations at these two points in time? Just knowing that some number got bigger over time usually isn’t enough to say things are better. I headed to the local courthouse, went into the basement and pulled out all the microfiche (the year 2000 was before the internet, right?) and discovered that the population was 54,291, which means, in 2000, about 15% of the population attended the fair.
Jumping forward, I googled the population of East Kintertownsylvania in 2017 and found out it was 76,397. That’s a fair increase (see what I did there!) over 17 years, but everyone knows a yarn processing center opened up there in 2011, bringing in a lot of new people and businesses. In 2017, then, the fair attracted about 16% of the population, which is just one percentage point more than in 2000. Hmmm, not as big of a rise as it seems.
Wait, there’s more. We know the population went from 54,291 up to 76,397, which is roughly a 41% increase over that time frame. Attendance went from 8,374 to 12,295, closer to a 47% increase. So attendance did grow a bit faster than the population, but only a bit – the share of residents attending moved from about 15% to about 16%.
Unfortunately, then, the fair isn’t really surging. Nearly all of that impressive-looking raw attendance jump came from there simply being more people around to attend. The fair organizers might want to look at ways to genuinely grow the crowd. I’m thinking free ice cream would do it. Would certainly keep me attending. Especially if it’s cookie dough flavored.
This article has a bunch of numbers, but there’s no need to fear them or get frazzled. I did some simple things, basic percentages and the like. The important takeaway is not to just trust some number you are given that might (or might not) show some increase or decrease over time. If someone says they did checks like these and can provide them, that might be all you need to know the number is sound. If you see no supporting info, and can’t find any other evidence yourself, then perhaps it’s not worth trusting. Percentages are easy to “sniff check” too – you don’t need to do major math (even though all of us have high-powered calculators in our pockets at all times); it’s pretty simple to just halve or quarter a number and see if things look correct-ish. I also made an assumption that fair attendance should at least keep pace with population growth – and I stated it. Some assumptions aren’t stated.
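Sanity checks like these fit in a couple of lines of code. Here are the per-capita attendance shares, computed from the attendance and population figures in the story:

```python
# Fair attendance as a share of the town's population
share_2000 = 8374 / 54291    # attendance / population in 2000
share_2017 = 12295 / 76397   # attendance / population in 2017

print(round(share_2000 * 100, 1))  # 15.4
print(round(share_2017 * 100, 1))  # 16.1
```

Dividing by the population is the key move: it turns two raw headcounts from different eras into numbers that can actually be compared.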
Definitions are important, and I know I said we should find out what "resurgence" even means. In this case, it looks like we don't have to bother. I'm sure it'll come up in another article. For now, I need a funnel cake with cookie dough ice cream on top!
Did you know that an astounding 92% of survey respondents said that chocolate chip cookie dough ice cream was the best flavor? Or that a mere 11% of people prefer sausage over pepperoni as their primary pizza topping? If you are a fan of cookie dough and pepperoni, then these numbers look great. Shout them from the rooftops!
You already know what’s coming next, don’t you? You’re at least 80% sure I’m about to say “but”. Well, congratulations, you’re right.
But wait, what do those percentages even mean? We often see numbers that look fine and seem like they were based on something good. They’re numbers, right? Numbers are good, scientific and mathematical things. Aren’t they?
Percentages are tricky because they often hide what went into producing them. On the cookie dough survey, just how many respondents were there? What if there were only 13 respondents, and 11 of those liked cookie dough? While it's a small group, most still liked cookie dough. Then I tell you the survey was done at the International Chocolate Chip Cookie Festival (wouldn't that be awesome, by the way!) and you wonder if there was a bit of bias. You should also be saying, um, 11 divided by 13 is 85%, not 92%, and you'd be right. I also didn't mention that one of the respondents said anchovies were the best ice cream flavor; that person got the surveys confused, and I excluded the response as an invalid outlier. But should I have excluded it? I never told you what my survey criteria were, what the possible responses were, or what I considered valid. That 92% isn't looking so credible anymore, is it?
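By the way, there's a cheap way to check a reported percentage against a claimed number of respondents: with n people, the only possible results are the fractions k/n. A minimal sketch (the function is my own illustration, not from any polling toolkit):

```python
def consistent_counts(reported_pct, n, tol=0.5):
    """Respondent counts k out of n whose true percentage 100*k/n
    lands within tol points of the reported percentage."""
    return [k for k in range(n + 1)
            if abs(100 * k / n - reported_pct) <= tol]

# With 13 respondents, a reported 92% could only mean 12 of 13 agreed...
print(consistent_counts(92, 13))  # -> [12]
# ...while 11 of 13, honestly reported, comes out to 85%.
print(consistent_counts(85, 13))  # -> [11]
```

So if the write-up says 11 of 13 liked cookie dough but the headline says 92%, somebody rounded very creatively.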
The pepperoni numbers shared above: let's say there were 100 respondents (a somewhat bigger sample) and the question was "do you prefer pepperoni or sausage on your pizza?" That leaves a nice binary answer, so nothing weird can show up. And we know that just 11 people checked sausage, so the math works out. But who did this survey? When you find out it was sponsored by the Mid Atlantic Pepperoni Foundation, then maybe even those nice clean numbers are suspect. Perhaps there was a bit of bias in who was surveyed.
When you see a nice clean percentage, the first thing you should ask is what the underlying numbers are. If those aren't given, it's a suspect number right away. If you do see the underlying numbers, ask whether they seem valid. If it was a survey: were there enough respondents? What was the methodology? Were values removed or corrected? The last big piece is who created the number or commissioned the study; knowing who was behind it might reveal a bias that wasn't apparent.
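On the "were there enough respondents" question, one rough rule of thumb (the standard normal-approximation margin of error, which I'm bringing in here myself, not something quoted in either survey) shows just how shaky a 13-person sample is:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion,
    using the normal approximation z * sqrt(p*(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# 11 of 13 respondents: about 85%, give or take a whopping ~20 points.
print(f"{margin_of_error(11/13, 13):.0%}")
# The same split among 1,000 respondents: give or take only ~2 points.
print(f"{margin_of_error(11/13, 1000):.0%}")
```

A percentage with a plus-or-minus of twenty points isn't telling you much of anything.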
Is every percentage you see untrustworthy? Well, no. However, you shouldn’t just blindly accept a number without seeing what’s behind it. There might be more lurking in there than you thought.
You've just seen the perfect post, the absolute best news article that aligns precisely with what you believe. The share button is calling out to you. You must give this to everyone you know. But stop. That's right, just stop. I know, that button is so easy to click, but it isn't going anywhere; you've got some time to think.
Why is that article you want to share so badly just so perfect? If you had a checklist in your head of things you believe are true about a certain subject, would it check all of them? If it did, and the article gave you nothing to question, there might be a reason: the article may want you to share it. OK, really, the writer of the article wants you to share it; the article itself isn't conscious and aware (hopefully). Many articles, posts, blogs, news items and such can earn more money from more likes, shares, upvotes, etc. People are more likely to share things they like and agree with. And since you likely have friends with similar views and attitudes, that little article can rack up all kinds of likes and shares.
So, you ask, what's the problem? Can I share it yet, please? Well, that article might not have been entirely truthful, or truthful at all, in fact. The author might have written whatever was necessary just to get the piece shared. For instance, let's say you like pizza and wish it were a truly healthy, nutritious meal to eat every day. Then you come across a report from a very professional science type of place that says they did a study and found that people who ate pizza every day had no greater incidence of health problems than those who didn't. Plus, it says those who ate thin crust with pepperoni were actually healthier than the control group. I mean, wow, this confirms everything I've ever wanted. I better tell the world!
But was the article real, or are you about to share something either partially or totally fabricated? Who did the study – is it someone you can look up? Who paid for it (was it “Big Pizza”)? Were the results independently verified? How many people were in the study? (we’ll talk about study sizes later, but for now just think small numbers = bad) If some or all of these questions are hard to answer, you might be looking at an article that just wants you to share it, with little to no truth or facts to be found. It might have been perfectly crafted to tick all the right boxes just so it could propagate itself. But now that you know, you paused, and maybe won’t hit share. You might even feel like washing your hands.
We’re surrounded by news and media and all manner of things wanting to get our attention. The people crafting these things are looking for any way to get into our conscious minds. But telling the real from the fake is getting harder and harder. Luckily, we can learn, adapt and apply some thinking to help us out and not let these things spread so far. Don’t worry, it doesn’t involve researching citations or learning statistics!
First tip – if you hear something that perfectly aligns with your own thinking, assume that was intended and ask yourself why. Someone may just be playing on your own likes to, well, get some likes.