Brotherly Consensus

A slightly paraphrased dialogue from Facebook.

Gay Albino Beer Nerd:
Scarcely anything I learned from higher education was as important as a gradual realization that dawned on me over years of discussions with scientists, doctors, engineers, historians, and other scholars: the so-called experts in a field really are experts. Yes, the consensus of the experts really is the best available opinion on a matter. No, I can’t figure out their biases and avoid them to come up with a better opinion, because even if the experts are biased, as an individual I’m much more biased.

Hotplate Vents:
In Socrates’ day, all the experts condemned him. In Jesus’ day, all the experts condemned Him. In Thomas Aquinas’ day, all the experts condemned him. In Galileo’s day, all the experts condemned him.

The consensus of the majority (even of supposed experts) is a very different thing from the truth!

Gay Albino Beer Nerd:
Interesting that all four cases you mention are errors of a religious consensus. Also, the accounts about Socrates are fictional, so they’re not real evidence. Same with the Gospels, IMO. And Galileo really didn’t have sufficient evidence for his claims. Aquinas might be a genuine case, except of course that he chose to convince the experts and bring them onto his side rather than ignore them and go his own way. Sometimes an individual has evidence s/he thinks should convince the experts. The way to distinguish whether that’s accurate or a flight of fancy is to try to convince the experts.

The expert consensus (not majority) tracks truth with greater fidelity than individual opinion. That makes it not so very different from truth, for practical purposes.

There followed a digression between Nine-Bondsman Jamboree and Gay Albino Beer Nerd about the consensus or lack thereof regarding Biblical criticism that led me to hedge with the “IMO” in the above comment.

Nine-Bondsman Jamboree:
What if a man with laissez faire leanings asked me to inform him on the pros and cons of capitalism or socialism and I cited him Marx and Keynes to read? Shouldn’t I also cite him a Von Mises? … Let’s say I’m asking them about quantitative easing. Would you think I should ask Niall Ferguson or Paul Krugman about that? … You aren’t suggesting above that in the absence of evidence to resolve an ongoing dispute experts’ personal views don’t influence their conclusions on a subject, are you?

Gay Albino Beer Nerd:
Hotplate interpreted “consensus” as “majority” and you appear to be putting emphasis on “individual expert” rather than “expert consensus”. I’m saying that a particular expert’s personal views are virtually irrelevant except insofar as they shape the expert consensus.

Nine-Bondsman Jamboree:
You’ve made some mistakes, but they’re redacted above. No fair making yourself look smarter. But now, in many words and with difficult-to-replicate formatting, let me ask in general: Why do you think consensus is so useful, when to me it seems too limited for use in most matters?

Gay Albino Beer Nerd:
There’s a mathematical proof (Aumann’s agreement theorem) that can be paraphrased simply and that I think can help explain how my thoughts on this are ordered. It says that two rational thinkers with the same starting beliefs and the same evidence will always exactly agree on their conclusions. It’s useful mainly because it tells us why people disagree: people who disagree are either thinking irrationally, or they have different starting beliefs, or they have different evidence.
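
A toy sketch of that idea in Python, with invented evidence numbers: two reasoners who share a starting belief and apply Bayes’ rule to the same evidence can’t help but reach the same conclusion.

```python
# Two reasoners with the same prior and the same evidence stream must
# compute identical posteriors. All numbers are invented for illustration.

def posterior(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Bayes' rule: update a probability on one piece of evidence."""
    numerator = prior * lik_if_true
    return numerator / (numerator + (1.0 - prior) * lik_if_false)

shared_prior = 0.5  # both start from the same belief
evidence = [(0.8, 0.3), (0.6, 0.4), (0.9, 0.2)]  # same evidence for both

alice = bob = shared_prior
for lik_true, lik_false in evidence:
    alice = posterior(alice, lik_true, lik_false)
    bob = posterior(bob, lik_true, lik_false)

assert alice == bob  # identical inputs, identical rule, identical conclusion
print(f"Alice: {alice:.3f}, Bob: {bob:.3f}")
```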

Humans are mostly pretty decent at rational thinking, but there are exceptions. Because of the way our brains are built, humans are systematically flawed in well-studied and measurable ways (a fascinating list). Frustratingly, even knowing all about humans’ systematic errors can’t stop us from committing those errors ourselves. The crucial point is that we can and often do notice others committing them. When one person works alone on a project, he will unavoidably make mistakes, and he won’t notice them. When a team works together on a project, each member will unavoidably make mistakes too, but they will almost always make different mistakes from each other, and so they can correct each other’s mistakes. Communities of people who hold each other’s work accountable will therefore reliably do better than people who go it alone. For those of us who aren’t working on the same problem, it is much more prudent to defer to the community than to an isolated individual, even if that individual is myself.

There are communities of people who spend big chunks of their lives learning about a topic, explaining their reasons to each other, and holding each other’s thinking accountable. They tend to be really good at rooting out irrational thinking and at making clear what beliefs they’re starting from. That process of learned people holding each other’s thinking accountable is what we refer to as an expert consensus. We don’t value the consensus merely because the experts happen to mostly agree. This isn’t about democracy or relativism. We value the conclusions of an expert consensus because the way they are reached means they are more likely to be true: they are reached by gradually, fitfully clarifying assumptions and removing errors. An expert consensus is like a ship with many compasses, each slightly biased in its own way: if we combine their readings, we get a more reliable pointer to true north than if we selected any one of them.
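
The compass metaphor can be simulated directly; here’s a toy sketch in Python, with all parameters invented. Each “expert” reports the truth plus an individual bias plus noise, and the average of many such reports typically lands closer to the truth than any single one.

```python
# Toy model of "many slightly biased compasses": averaging diverse,
# independently biased estimates beats trusting a single one.
# All parameters here are invented for illustration.
import random

random.seed(0)
TRUTH = 10.0  # the unknown quantity the experts are estimating

def expert_estimate() -> float:
    bias = random.uniform(-2.0, 2.0)  # each expert leans its own way
    noise = random.gauss(0.0, 1.0)    # plus ordinary measurement error
    return TRUTH + bias + noise

estimates = [expert_estimate() for _ in range(100)]
consensus = sum(estimates) / len(estimates)

print(f"a single expert is off by {abs(estimates[0] - TRUTH):.2f}")
print(f"the consensus is off by   {abs(consensus - TRUTH):.2f}")
```

The catch, of course, is that the biases have to point in different directions; if every compass shared the same bias, averaging would preserve it. That’s why the diversity of the community matters as much as its size.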

Sometimes disagreement about starting beliefs persists. This isn’t actually a problem for the method of expert consensus, as long as people clearly say what their starting beliefs are. It’s kind of like math, where you’re allowed to start with whatever axioms you like and then go on to prove things based on those axioms. Or it’s like writing a good story, where you can have any kind of characters you like, but then the story should unfold in a way consistent with how those characters would act.

You requested examples regarding Biblical interpretation and starting points. When deciding how to interpret the Bible, Lutherans start (epistemically) from Luther’s five solas. Baptists and most Evangelicals start with the solas and add doctrines like soul competency. Anabaptists start with the solas and add doctrines on peace and simplicity. Calvinists start with the TULIP doctrines. Pentecostals start from doctrines about the divine origin of ecstatic experience. Catholics and Eastern Orthodox start from the historic beliefs of the Christian people. Oriental Orthodox start from the historic beliefs of their local Christian populaces. Secular historians start from ancient documents and archeological remains. (All the groups teach Biblical criticism, as it’s a subject matter, not a school of thought.) The groups reach different conclusions from each other because they start with different assumptions, but within each group the members can reach agreement because they share starting assumptions.

When the topic is the real world, rather than math or stories, different assumptions are not equally sensible. Some assumptions are clearly much bigger assumptions or much more specific assumptions than others. The real world is risky and it’s prudent to make the smallest, least specific assumptions in order to avoid unnecessary risks.

Often a community of scholars doesn’t share exactly the same starting assumptions, but its members do at least have similar assumptions. They can still work with that, because if they correctly update their beliefs based on the same evidence, they end up with conclusions at least as similar as their starting beliefs. Of course, that means they might need a lot of evidence to reach close-enough agreement, and sometimes there isn’t enough evidence available. Then all the experts can do is search for more evidence and withhold judgment until they find it. The primary virtue of expert consensus is that it is more likely to be true than any individual’s opinion, but it is still helpful even when it hasn’t settled on a definitive answer, because it constrains the range of remaining options and gives us a sense of their likelihoods. The truth is probably somewhere on the spectrum between the various experts’ hypotheses, especially near the points where the experts come closest to consensus. The debate is very rarely about a logical binary, “X is the case” versus “X is not the case”, but if it is, then the spectrum collapses to its two endpoints and the truth is certainly one of them.
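
Here’s a toy sketch in Python of that convergence (the priors and the true rate are invented): two scholars with different starting assumptions about an unknown frequency update on the same shared observations, and the gap between their estimates shrinks as the evidence piles up.

```python
# Two scholars start from different assumptions (different Beta priors,
# expressed as pseudo-counts) about the same unknown rate, then update
# on a shared stream of evidence. Their estimates converge.
# The priors and the true rate are invented for illustration.
import random

random.seed(1)
TRUE_RATE = 0.7

prior_a = (8, 2)  # scholar A starts out assuming the rate is high
prior_b = (2, 8)  # scholar B starts out assuming the rate is low

observations = [random.random() < TRUE_RATE for _ in range(1000)]

for n in (0, 10, 100, 1000):
    successes = sum(observations[:n])
    est_a = (prior_a[0] + successes) / (prior_a[0] + prior_a[1] + n)
    est_b = (prior_b[0] + successes) / (prior_b[0] + prior_b[1] + n)
    print(f"after {n:4d} observations: A={est_a:.3f}, B={est_b:.3f}, "
          f"gap={abs(est_a - est_b):.3f}")
```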

One example where I defer to the expert consensus is on climate science. I don’t want to spend years learning the physical principles, experimental methods, main results, points of dispute, and so on. But it’s clear that 98% of climate scientists agree with the consensus statement that they’re 90% confident global warming is real and man-made. The leftover climate skeptics are always sharing scientific tidbits online with the public, which to me is very sketchy; if they were being scientists rather than political operatives, they’d publish arguments and experiments to change the expert consensus rather than to mobilize ignorant voters against restrictive industrial regulations.

One example where I don’t defer to the expert consensus is on the correct interpretation of the Quran. I don’t accept the starting beliefs of the groups arguing over its interpretation, so all I can say is that the correct interpretation is probably somewhere in the range of the different answers its scholars propose.

Nine-Bondsman Jamboree:
Very well written, Nerd. Consensus is clearly valuable and must take a central role in any polemical conversation. I have not meant to imply in anything that I have previously written that I have found it somehow unimportant. I actually had a dream after our conversation that helped me process the idea of it. In my dream I saw a whole bunch of small paintings up close, but as I moved farther away from them, I saw that there were thousands, and that together they formed a composite image. This central image was the notion of consensus, although those areas not a part of the image were necessary to define it.

On the common errors in human thinking, you’ll get no arguments from me. I have witnessed them, and been victim of them, in my own experience many a time. Haven’t we all? To err is human. You’ve described something akin to Murphy’s law in our thinking process.

You note that sometimes disagreement about starting points persists. This statement, combined in my thinking with the Biblical critics you referenced above, makes me wonder why you didn’t also list critics of another starting point as well.

But this all just leads me to a new curiosity. How does one switch a starting point? What is that process like? I’ve been reading William James’ essays on Belief lately. Switching starting points is a pulsating question in my mind, and you’re the one to answer it from your own experience. You see, I have always hitherto viewed a change in worldview as outside one’s own capacity: although one might attempt to change his view, certain facts which he finds irresistible would remain stubbornly intact in the truth-processing portion of his mind. I’ve thought of such worldview switches as the moment when a magician suddenly combines those rings that were not combined before; but what’s behind that sleight of hand? This may be a rather Christian metaphor at heart: since a magician is the one who combines the rings (representing differing worldviews), it could stand for a conversion experience of which God is the author. So, what are your thoughts on how one switches a starting point in terms of worldview?

Gay Albino Beer Nerd:
Earlier I alluded to the mathematics of belief. It gives us idealized models, not instructions for how we should normally think, although of course it’s interesting to compare real thought to idealized thought. Surprisingly, though, there’s no idealized model for choosing or switching our starting assumptions.

Nobody knows for sure the right way to come up with starting assumptions, but we all know there are wrong ways. For example, it’s a mistake to make assumptions you don’t have to. If somebody asks you your name, you know the answer, so you don’t make assumptions about it. If a passenger asks how fast you’re driving, you find out by checking your dash, and the idea of assuming an answer is kind of funny. If a student asks you whether extraterrestrial animal life exists, the best response is to admit you don’t know, since you can’t find out yet and there’s no pressing reason to assume either way. The principle I find here is that assumptions are risks, and we shouldn’t take risks without commensurate reward. For another example, it’s a mistake to assume that the present situation is totally different from what came before. If you find out something is stealing Bonny Oleander Hens’ chickens, you should assume it’s that cougar again and not a leprechaun. If you’re designing tests for your students, you should assume they’ll do about as well as usual, not awesomely better or awfully worse. The principle I find here is that assumptions are gaps in our knowledge, not gaps in the world, so we should fill in the gaps with things we know, not things that might be.

A computer can be set up to solve a problem in general, without being told what the problem is or how to solve it, but only if someone programs starting assumptions into it. Lots of methods of generating initial assumptions were tried before somebody discovered a method with very nice properties: it converges more quickly to the truth than all other methods except the ones where the programmer knows the answer to the problem and gives the computer hints. Unofficially, you could say it’s the best way to learn when there’s no teacher to ask. Intriguingly, it turns out to be Occam’s Razor, computerized (the method is known as Solomonoff induction: hypotheses with shorter descriptions get higher starting probability).

Humans don’t think the way computers operate, of course. A human equivalent of the method goes like this: write down the shortest complete description of the situation in terms of your experiences and the relations between them; the longer the description, the less likely the situation. (William James would like it.) It makes intuitive sense as probability: if each kind of piece is unusual, then the more pieces you collect, the more unusual that exact collection gets. Unlike the computer version, the human equivalent hasn’t been proven to have any special properties, and it’s impractical anyway. Nevertheless, judgments relying on Occam’s razor are often wise and helpful. It’s in this light that I recommend using Occam’s razor to prudentially choose starting points of belief when they are needed.
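
To make the “shorter description, higher probability” rule concrete, here’s a toy sketch in Python. The hypotheses and their descriptions are invented, and counting the bytes of an English sentence is only a crude stand-in for real description length:

```python
# Occam's razor as arithmetic: give each hypothesis a starting weight
# of 2^(-description length in bits), then normalize. Byte-length of
# English text is only a crude proxy for true description length
# (ideally, the shortest program that produces the description).

hypotheses = {
    "the cougar again": "the same cougar seen before",
    "a leprechaun": "a new kind of creature, with new motives, "
                    "wielding new and unexplained magic",
}

def description_length_bits(description: str) -> int:
    return 8 * len(description.encode("utf-8"))

weights = {name: 2.0 ** -description_length_bits(desc)
           for name, desc in hypotheses.items()}
total = sum(weights.values())

for name, weight in weights.items():
    print(f"{name}: starting probability {weight / total:.3g}")
```

The absolute numbers are meaningless; the point is the ratio. Every extra assumption in a description multiplies the improbability.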

Which is never, I suspect. Foundationalism is the idea that our knowledge rests on a few secure beliefs the way a tower rests on a strong foundation. My problem with it is that our brains don’t work like that. Our beliefs are encoded in a neural net. Some beliefs are “central”, with their neurons connecting to many other beliefs, but there are no foundational beliefs upon which all others are built, no beliefs which for biological reasons stubbornly resist truth processing. There are only the beliefs you’re considering right now and all the connected beliefs that you’re not considering right now. This means that changing starting points or worldviews doesn’t look magical, like yanking a foundation out from under a tower and sliding in a new one. It looks more like patching a tapestry with a slightly altered image on the patch. The change is gradual, and it’s not always obvious how many patches it takes before the overall picture is something new.

Because of how beliefs link up in a network, a whole cluster of them can be mutually self-reinforcing. You can see it in conversation when someone changes their mind about one small issue, moves on to a connected issue, and later comes back to the first issue with their original perspective as if it hadn’t already been discussed. Of course you’ll never be able to simultaneously address every issue in the cluster, so getting them to really evaluate the truth of the issues looks hopeless. And it is hopeless to expect logic to win out in such cases, because they’re accidentally committing a big logical mistake: treating each belief as all-or-nothing instead of holding it with a degree of confidence. Whenever you encounter evidence in favor of a possible belief, you should increase your confidence in that belief incrementally, and whenever you encounter evidence against it, you should decrease your confidence incrementally. When beliefs are held with degrees of confidence like this, you no longer get the self-reinforcing effect, and clusters of beliefs can instead change with the evidence in a logical and healthy way.
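
A toy sketch in Python of that incremental updating (the evidence weights are invented): each piece of evidence nudges the confidence up or down by its strength, rather than flipping a switch.

```python
# Belief as a degree of confidence: evidence shifts the belief by its
# weight in log-odds space instead of flipping a binary switch.
# The evidence weights below are invented for illustration.
import math

def update(confidence: float, evidence_weight: float) -> float:
    """Nudge a probability by the evidence's weight in log-odds."""
    log_odds = math.log(confidence / (1.0 - confidence)) + evidence_weight
    return 1.0 / (1.0 + math.exp(-log_odds))

belief = 0.5  # start undecided
for weight in (+1.0, +0.5, -2.0, +0.3):  # mixed evidence for and against
    belief = update(belief, weight)
    print(f"confidence is now {belief:.3f}")
```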

A personal example: I formerly identified as a libertarian. After I’d been convinced of many individual libertarian positions, they lent support to the non-aggression principle (NAP), the central belief of libertarianism. From then on, the NAP was part of my worldview, the lens through which I viewed politics and decided what beliefs to hold on other political issues. Later, my belief in the NAP was weakened by the libertarian author David Friedman, who described cases where it was impossible to apply, and then broken by arguments for a contradictory principle. That changed the central belief but left the connected ones largely intact; I still hold mostly libertarian political positions, but now I feel they need justification on an individual basis.

Nine-Bondsman Jamboree:
Your principles make sense as you’ve described them. You have described something of a mental patchwork, or landscape, which seems more in keeping with this idiosyncratic world than a categorical style would allow for. However, I find myself doubting that the difference between the mental-patchwork view and foundationalism is distinct and separable enough to treat them as discordant, or to label one approach ‘how humans think’ and the other ‘how they don’t’. I wonder if you have read William James’ essay ‘Great Men and Their Environment’. I find a phrase in it pertinent here: “…in the vagueness of these vast propositions we have lost all the concrete facts and links; and in all practical matters the concrete links are the only things of importance.”
