The World Is Awash In BS

Messages from the Commander about life, the universe and everything.
Commander
Space Captain
Posts: 780
Joined: Wed, 30 Mar 16, 01:49 am
Location: On the Starport

#1 The World Is Awash In BS

Post by Commander »

A couple of years ago, the magic team of Penn & Teller did a TV series on cable called Bull**** (oops, the word filter got me). Well, if you listen to the author of this article, Penn & Teller could have continued the series for years...
This is really the best paragraph I have read so far in 2017:
"The world is awash in bullshit. Politicians are unconstrained by facts. Science is conducted by press release. So-called higher education often rewards bullshit over analytic thought. Startup culture has elevated bullshit to high art. Advertisers wink conspiratorially and invite us to join them in seeing through all the bullshit, then take advantage of our lowered guard to bombard us with second-order bullshit. The majority of administrative activity, whether in private business or the public sphere, often seems to be little more than a sophisticated exercise in the combinatorial reassembly of bullshit."
It’s from The Bull$hit Syllabus, which was created by University of Washington Professors Carl Bergstrom and Jevin West, who are trying to combat The Bull$hit. The syllabus includes questions and standards for data scientists to think about and use.
Basically, their point is that now that computers can hand us all sorts of data in such mass quantities, cherry-picking the data sets that support your conclusion has become easy and tempting, and it is a questionable practice.

Think of it this way (for the rocket minded): you design a rocket in RockSim or one of the other design programs and run some simulations to determine the altitude your rocket will attain. You start playing with some of the variables in the program (wind settings, Cd, etc.) until you get a grouping of altitudes you like, ones that make it look like your rocket is going to kick butt in the C altitude contest at the local flyoff. You build the rocket and attend the competition, only to have your design underperform so badly on the actual launch that you end up hiding from your local nemesis while he rolls on the ground laughing and telling everyone that he attained a better altitude there rolling than your rocket did. What went wrong?

Well, first off, the adjustments you make have to have a basis in fact. You can set the Cd to zero, and sometimes that is a good call for design work, say to see how a particular tweak affects performance in isolation. But believing that with that change you will get real-world performance of the same caliber is just wishful thinking.
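To make that concrete, here is a toy Python sketch of a coast-phase altitude estimate with quadratic drag. This is not RockSim, and the mass, frontal area, and burnout velocity are all invented for illustration; the point is just how much zeroing out the Cd inflates the predicted altitude:

```python
import math

def coast_altitude(v_burnout, cd, mass=0.05, area=0.0005, rho=1.225, g=9.81):
    """Toy coast-phase height after motor burnout with quadratic drag:
    h = (m / 2k) * ln(1 + k*v^2 / (m*g)), where k = 0.5 * rho * Cd * A.
    All default parameters are made-up illustrative values, not real data."""
    k = 0.5 * rho * cd * area
    if k == 0:
        # Cd = 0 means a pure ballistic coast: h = v^2 / (2g)
        return v_burnout ** 2 / (2 * g)
    return (mass / (2 * k)) * math.log(1 + k * v_burnout ** 2 / (mass * g))

v = 60.0  # m/s at burnout (assumed)
print("realistic Cd:", round(coast_altitude(v, cd=0.75), 1), "m")
print("'tweaked' Cd=0:", round(coast_altitude(v, cd=0.0), 1), "m")
```

With these made-up numbers the Cd=0 "simulation" predicts well over half again the altitude of the realistic one, which is exactly the kind of grouping that looks great on screen and embarrassing at the flyoff.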
They believe that with the advent of "Big Data" and the tools to deal with it, the amount of BS in the world has risen sharply. It has become too easy for BS to be taken out of context, spread, and made to go "viral." Big Data has given us ginormous datasets to study and manipulate. While we might not be quick to draw conclusions from a smaller data set, we have become very comfortable giving credence to implications and patterns in big data sets. Bergstrom explains:
Before big data became a primary research tool, testing a nonsensical hypothesis with a small dataset wouldn’t necessarily lead you anywhere. But with an enormous dataset, he says, there will always be some kind of pattern.
“It’s much easier for people to accidentally or deliberately dredge pattern out of all the data,” he says. “I think that’s a bit of a new risk.”
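Bergstrom's point about dredging is easy to demonstrate yourself. A minimal sketch, assuming nothing but pure noise (the 50-row, 2,000-column sizes are arbitrary): correlate one random target against thousands of equally random columns, and the best correlation will look impressive anyway.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
n_rows, n_cols = 50, 2000          # arbitrary "big data" shape
target = [rng.gauss(0, 1) for _ in range(n_rows)]  # pure noise outcome

# Dredge: keep only the most impressive-looking correlation out of 2000
best = max(
    abs(pearson([rng.gauss(0, 1) for _ in range(n_rows)], target))
    for _ in range(n_cols)
)
print(f"best |r| among {n_cols} pure-noise columns: {best:.2f}")
```

Any single noise column correlates only weakly with the target, but the maximum over thousands of columns is reliably large. Report only that winner and you have a "discovery" made entirely of noise.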
So if I continuously run launches with certain beliefs, sooner or later, if I run enough tests, I will get the answer I am looking for. An even more egregious error is looking for a particular outcome and then stopping the program once that outcome has been achieved. In other words, I tell the program to keep running until the average altitude attained is over 100 meters. It may take the program a million attempts, but sooner or later it will reach that goal. Does that make the results more accurate, because a lot more tests have been run? Who knows.

Let's make it easier. Say you have a program that randomly determines a coin flip of heads or tails, and you tell that program to keep running until it has achieved a sixty percent heads result. In the first six flips, it miraculously comes up heads each time and the program stops. Does that mean there is really a six out of ten chance of flipping heads? We all know it doesn't. There may also be an issue with the created program itself. Who can say for certain?
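That coin-flip stopping rule is easy to simulate. A minimal Python sketch, where the 60% target, 10,000-flip cap, and 200 repetitions are all arbitrary choices for illustration:

```python
import random

def run_until_target(target=0.6, max_flips=10_000, rng=None):
    """Flip a perfectly fair coin, stopping as soon as the running heads
    rate reaches `target` (checked from the 5th flip on).
    Returns (heads_rate_at_stop, flips_used, stopped_early)."""
    rng = rng or random.Random()
    heads = 0
    for n in range(1, max_flips + 1):
        heads += rng.random() < 0.5   # a genuinely fair coin
        rate = heads / n
        if n >= 5 and rate >= target:
            return rate, n, True      # stop the moment it looks good
    return heads / max_flips, max_flips, False

rng = random.Random(1)
runs = [run_until_target(rng=rng) for _ in range(200)]
early = [r for r in runs if r[2]]
print(f"{len(early)} of 200 runs 'proved' a 60% heads rate")
```

Every run that stops early reports at least 60% heads by construction, even though the simulated coin is exactly fair. The stopping rule, not the coin, manufactures the result.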

:arrow: article
Commander
Starport Sagitta
NAR No.97971