Can A Human Machine Be Significant?
Response to Ben Watkins
“If there is no God, then man is not significant. He is only a machine. But if man is only a machine, then so is his thinking. And if his thinking is only the product of chemical processes, then so is his conclusion that there is no God.” -Francis Schaeffer
A few days ago I posted the above quote by one of my favorite Christian authors, Francis Schaeffer. Since posting it I’ve received well over two hundred replies, but only one reply so far even scratches the surface of having any substance. That reply came from my friend, Ben Watkins. He said:
Calling humans “machines” does not imply insignificance. Any system capable of reasoning, love, and creativity would be significant regardless of its composition. Why should it matter whether Plato or Shakespeare were made of carbon, silicon, or “soul stuff”? Moreover, Schaeffer conflates the physical realization of thinking with the content of thought. A belief may be realized by neural processes, but the proposition believed is not identical to those processes—just as the truth of a claim isn’t identical to the ink used to write it. Perhaps the thought is machines lack libertarian free will because they are either determined or partly random. But that’s already true of us. We act as we do because of the way we are, and the way we are is itself either determined by prior causes or partly the result of chance.
Ben makes three points in this statement. The first is that humans can have significance even if we're machines. The second is that physical thinking is not the same thing as the content of our thought. The third is that humans don't have libertarian free will even if we're not machines. These are all interesting points, and I'll address them in order, starting with significance.
Significance
Ben’s first contention is that even if humans are machines, we’re still significant. He says:
Calling humans “machines” does not imply insignificance. Any system capable of reasoning, love, and creativity would be significant regardless of its composition. Why should it matter whether Plato or Shakespeare were made of carbon, silicon, or “soul stuff”?
I think that before we discuss my problems with Ben's view, we should try to get a handle on what it means to be significant. What is "significance"?
Typically when we say something is significant, we mean it has value or importance. "Finishing the marathon had great significance" and "the most significant person in his life is his mother" are typical uses of the word. Things that are insignificant are things that have little to no value. Trash, for example, is insignificant. So to be significant is to be valued, to be worth something, to be distinguished.
Ben says that significance is based not on what a human is, but on what a human can do. If you have the ability to love, reason, or create, then you have significance.
The first glaring issue I see with this is that it seems arbitrary. Why do these abilities confer significance, rather than other abilities? Why doesn't hatred give us significance, or violence? Humans have many potential abilities, so why these? Also, are we less significant if we have these abilities to a lesser degree? Are more intelligent and more creative people more significant than people with less intelligence and creativity? A baby is not intelligent, not creative, and not very loving. Does a baby have no significance? That seems a very strange thing to say.
Another immediate problem here is that abilities are properties. An ability is an add-on, like red paint applied to a wall. Without the paint, the wall is still a wall. If human abilities are what's significant, then it's not the humans that are significant, it's the properties. This would mean that being a human, in and of itself, wouldn't be significant on Ben's view.
But these are just some of the surface-level issues I see. Let's now dive down to the heart of the disagreement. Ben seems to believe that a system can have significance regardless of where it comes from and regardless of what it is. As long as it has the right abilities, it has significance. The problem is that this isn't how significance works. Significance always depends on context. Change the context, and you change the significance. For example, a child's first step is very significant. When that child becomes a man and walks around, walking is no longer significant. The same person walks, but in one situation it's significant, and in another it's mundane. What changed? Not the ability. The context.
Or take another example.
You wake up and find "I love you" spelled out in Cheerios on the kitchen table. Your wife is very shy, and she's never said this to you. Would this discovery be significant? Absolutely. But what if it turned out the Cheerios were arranged that way by an earthquake? The Cheerios are still arranged the same way, but has something happened to the significance? Absolutely. There is no significance anymore. Why? Context.
Context matters.
Think about a non-player character (NPC) in a video game. These characters can appear to create, reason, and love. But are they significant? No. Why? Because they're programmed to be this way. They're machines. Everything they do is just the result of prior programming; they have no agency. "I love you" from an NPC is meaningless. When NPCs make an argument, we skip the dialogue. When an NPC dies, we don't care. Most of the time we're mowing them down ourselves. NPCs are disposable and insignificant because they're machines.
My point here is that what we are, what causes us to exist… that's what makes us significant. Ben says that Plato and Shakespeare would still be significant even if they were machines. Is that true? I've played video games with Plato and Socrates in them. I wasn't impressed. Everything they said was programmed; every reaction was scripted. Even if they were exact copies of the great philosophers, we're not going to cry or care if they die in the game. Why? Because of what they are. I believe this example directly shows Ben is mistaken here. If humans are really just machines, like a robot or an NPC, why would we have value or significance? We wouldn't.
I think this point was illustrated well in the movie "The Stepford Wives." In the film, the wives are turned into robots by their husbands. Everyone who watches the film knows that when the women become machines, they no longer have significance. When they say "I love you," it's meaningless. When they do the dishes, it's meaningless. When one gets unplugged or broken, you just get a new one. Who cares about a bot? We care about the women, not the machines, and we all know there's a world of difference between the two.
I believe this demonstrates Ben is totally mistaken. If humans are machines, we do lose our significance. There’s clearly some major difference between what we are, and a Stepford Wife. Remove that difference, turn man into a machine, and you remove man’s significance.
Content
Ben next argues:
Moreover, Schaeffer conflates the physical realization of thinking with the content of thought. A belief may be realized by neural processes, but the proposition believed is not identical to those processes—just as the truth of a claim isn’t identical to the ink used to write it.
The problem is that this isn't Schaeffer's point. Schaeffer isn't arguing that if thoughts are identical to chemical processes, then thoughts can't be true. Schaeffer is making a different point, one that often gets missed.
Schaeffer is really arguing this: what a mind is, and where it comes from… matters. Here's a simple example to help illustrate the point. We all drink water, but would you drink water if it came from a toilet? Of course not. The point is that where something comes from affects what it is, its nature, and it can affect its reliability. After all, aren't atheists famous for demanding "source?!" whenever someone makes a claim they disagree with? Why do they do that? Because where the information comes from, and how it was derived… matters.
I'm sure everyone reading this article has flown at some point in their lives, so let me ask you: why did you trust that the plane wouldn't crash? I'll tell you. It's because you believe the plane you're flying on was made by very intelligent and trustworthy people who designed the aircraft so well that it's very unlikely anything will go wrong. Would you fly on an aircraft made by thieves and liars? What if it was made by infants, or monkeys? No, you wouldn't. Why? Because where the aircraft comes from, what made it… matters. That is Schaeffer's point.
If our mind is just the product of a mindless chemical process that isn't aimed at truth, that isn't intelligent… why would we trust what it produces? On Ben's view, we don't control the chemicals that produce our thoughts; the chemicals are controlled by physics. Imagine flying on a plane that was built by a mindless process that wasn't even aimed at flight. Now imagine the plane is flown by a chemical reaction rather than a person. Trusting that plane would be insane.
But on Ben's view, this is exactly what the brain is. The brain is the product of a mindless chance process that wasn't aimed at truth or reason; it isn't aimed at anything. Just as a river doesn't aim for the ocean, what we have in our skulls isn't aimed at anything. If it happens to think or be rational, that's a coincidence. Furthermore, not only is our brain the product of a mindless chance process, but what controls it? Chemicals. Everything we think is really just the result of a mindless chemical reaction. Chemicals don't know about truth or reason; they just react. All you have is chemicals doing stuff. There's no pilot in charge of this thing, no mind steering it. No, it steers us.
So given all of that, is it reasonable to think our brains will reason correctly and come to true beliefs? No, it's not. That is Schaeffer's point. What something is, and where it comes from, affects everything about it. Ben misses this point entirely.
Free Will
Lastly, Ben says:
Perhaps the thought is machines lack libertarian free will because they are either determined or partly random. But that’s already true of us. We act as we do because of the way we are, and the way we are is itself either determined by prior causes or partly the result of chance.
Ben guesses that the point has something to do with free will. It doesn't. A computer can reason correctly even though it's programmed and has no free will. So can a calculator. It should be rather obvious that something can be programmed with intelligence and yet lack agency; just look at AI. The argument isn't that a machine can't reason. The argument is this: if a machine is programmed by something random or unintelligent, should we expect the machine to reason well? I think not.
But I do want to address the point Ben makes. He says that humans simply can't have libertarian free will because we don't decide to be humans. We act the way we do because of what we are, and we didn't decide to be what we are, so we can't have free will no matter what. That appears to be Ben's argument.
I agree with Ben that humans didn't decide to be humans. Where I disagree is with his conclusion that something can't be free unless it chooses to be what it is. Why can't humans simply be creatures that have free will? If we are, the fact that we never chose to be human doesn't take that freedom away. We can have the ability to freely choose certain things in our lives even if we didn't freely choose to be human, or to have this ability, in the first place.
Conclusion
I appreciate Ben taking the time to engage with Schaeffer’s argument, but I think his response ultimately misses the heart of the issue.
The disagreement isn’t really about whether machines can think, or whether thoughts have content, or even whether humans have libertarian free will. The real question is much deeper.
What makes human beings significant?
Ben says it's our abilities… that if something can reason, love, and create, then it has significance. But that just pushes the question back a step. Why do those abilities matter? Why are those the kinds of things that give something value? And why think those abilities have any real importance in a universe that, on his view, is ultimately just matter and energy moving according to blind laws of physics? Furthermore, context matters. We all recognize that an NPC is not significant the way a human is. Why? Because humans are not machines.
Second, Ben was right that the content of a thought is not identical to the physical process that realizes it. I agree. But that's not the issue. The issue is where those processes come from… are they the result of a great intelligence, or just the result of blind, non-rational causes? If the latter, then we have a problem, because the belief "naturalism is true" would be produced in exactly the same way as any other belief… not because it's true, but because our brains just happened to react chemically in that way. And if that's the case, then the very reasoning used to defend naturalism, along with every other belief we hold, is called into question.
Finally, Ben's claim that humans can't have free will because we didn't choose to be humans is a non sequitur. If humans have free will, that would remain true whether or not we initially chose it.
Hopefully you can now see that what we are and where we come from matters.
It affects whether our thoughts are trustworthy.
It affects whether our values are real and substantial or meaningless and arbitrary.
And it affects whether we are truly significant… or just complicated arrangements of matter that happen to feel like we’re significant and rational.
That’s the question Schaeffer was raising.
And I don’t think Ben has succeeded in answering it.