This doesn't surprise me, but it does have worrying ethical implications. What if there were a machine to make everyone moral? Would it be immoral to use it? Or would it be immoral not to?
Edit with expansion of scenario, so it is clearer what I'm getting at: The situation I envision is a machine that can be programmed to produce some set of "morality" as it were. Who gets to decide what the moral system is, and who it is applied to? Certainly we wouldn't want a corporation or an unelected body to do it. But we also think the machine would be useful in reducing senseless tragedy like murder.
It's a good answer, not just clever. The issue he's addressing, clarified, is, "What if we had a source of infallible (moral) knowledge?"
Then, "ask that source, not me, you, or any other fallible human" is not just clever but correct.
However, infallible sources of knowledge badly violate everything we know about epistemology. So don't expect to actually find one. What interests me more is this issue:
If I had a magnet that made people act a certain way, and I thought that way of acting was moral/good, should I use it on people? The answer is no: if I disagree with someone about what's good, I shouldn't force them with magnets, because it could be me who is mistaken, not them. I should instead persuade them to use the magnet voluntarily.
What's good about a discussion with attempted persuasion is that it could go either way: I could end up persuaded, or he could. This is different from using the magnet, which can't go either way, so if it's a mistake there's no possibility of its being corrected.
But since SlyShy started the thread, and jjs replied to him, I think I'll go with SlyShy's expansion of what he meant. Feel free to choose for yourself what SlyShy was talking about, but let's not confuse things by bringing jjs into it, hm? ;)
Edit: aaaand I missed that the poster you were replying to was also SlyShy, changing the point of your comment. Nevermind me. Let's move along. Nothing to see.
Won't work. You have to choose the machine's moral decision outcome before you use it, which means that you're making the decision while still "immoral" (for lack of a better word).
The question "What if there were a machine to make everyone moral?" implicitly takes it as a premise that the machine would make people moral, whatever that is, rather than moral according to someone's judgment. This reading may be debatable, but I think that's how he took it.
Sounds like you are assuming that "moral" is an exact concept. Practically speaking, there seem to be lots of cases where opinions on what is moral differ: abortion, porn, finance...
What would worry me about a machine that could tweak everyone to the same "standard" morality is who gets to decide what that standard is: having my morality controlled by the porn industry, Goldman Sachs, or the religious right all seem equally bad.
No, actually, that's part of what makes this question interesting to me. The situation I envision is a machine that can be programmed to produce some set of "morality" as it were. Who gets to decide what the moral system is, and who it is applied to? Certainly we wouldn't want a corporation or an unelected body to do it. But we also think the machine would be useful in reducing senseless tragedy like murder.
I think one's capacity for self-delusion/rationalization vastly outweighs whatever one's moral compass is telling them to do.
Before you went applying this to folks' brains, you could poll convicted criminals to determine what sort of morality they're operating off of, and see if it even differs statistically from the population as a whole. I'd be surprised if it did. Perhaps someone has already studied this? Any ideas?
you could poll convicted criminals to determine what sort of morality they're operating off of, and see if it even differs statistically from the population as a whole.
There have been studies of this nature. I recall seeing a report about such interviews with criminals quite a long time ago, on the United States television program Sixty Minutes. Convicted criminals generally use considerably different moral reasoning from typical members of the public without criminal records. One example: a prison inmate was asked how he felt about taking other people's possessions. He replied, "I think of it as MY thing, not his thing."
Who needs a machine when drugs are so popular? Ritalin to make kids behave, Antidepressants to make people more lively, Amobarbital to make people honest, etc. Morals are a fickle thing and I don't see how any meaningful conclusion can be made from asking the question. "How moral is an atomic bomb?" is just as meaningless a question to ask.
This isn't about changing how likely someone is to behave morally; it's about changing the criteria they use for making moral decisions. Specifically, they're knocking out the ability to use theory of mind to guess a person's intentions.
One advantage of transcranial magnetic stimulation in general is that it is a treatment that can be evaluated in placebo-controlled studies, because it is possible to put patients into convincing sham treatments.
Transcranial magnetic stimulation needs a LOT more research, but if treatment protocols can be refined through that research, it holds promise for improving outcomes in some patients with mood disorders and perhaps other difficulties related to abnormal brain function.
Is that actually the case? I'm of the impression that TMS systems 1) are audibly noisy when in use and 2) evoke activity in the scalp that can be perceived by the subject ("tingling"). Admittedly, adding sham noise would be fairly easy, but I am not sure what countermeasures are possible if the subject can "feel" when TMS is active.
I read in a medical text that sham treatments are possible, but I didn't see details there of what must be done to make the sham indistinguishable from the studied treatment. I don't recall that source saying anything about sensations in the patients' scalps.
"In both experiments, the researchers found that when the right TPJ was disrupted, subjects were more likely to judge failed attempts to harm as morally permissible."
This statement is meaningless without knowing how much more likely the judgments were and what the error range was for the results.
I have a copy of the paper if anyone is interested. My email is in my profile if you want to read it.
The short story is that there's no good answer to the "how much more likely" question, because they scored the results on a 7-point scale of permissible vs. forbidden rather than as a yes/no judgment.
But their error margins are very good. Here is an excerpt from the combined analysis of the two experiments: "TMS site specifically affected judgments of attempted harms: TMS to the RTPJ vs. the control site resulted in participants’ judging attempted harms as more permissible [independent samples t test based on the item analysis: t(87) = 3.6, P = 0.001]."
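For anyone curious what that reported statistic means mechanically, here is a toy illustration of a pooled-variance independent-samples t test on 7-point ratings. The numbers below are made up for illustration; they are not the study's data, and the study's actual item analysis involved more structure than this sketch shows.

```python
import math

# Toy 7-point permissibility ratings (1 = forbidden, 7 = permissible).
# These values are invented for illustration; they are NOT the paper's data.
rtpj    = [5, 6, 5, 4, 6, 5, 5, 6]  # judgments after TMS to the RTPJ
control = [3, 4, 3, 2, 4, 3, 3, 4]  # judgments after TMS to the control site

def t_statistic(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of b
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(pooled * (1 / na + 1 / nb))     # standard error of mean diff
    return (ma - mb) / se

df = len(rtpj) + len(control) - 2
print(f"t({df}) = {t_statistic(rtpj, control):.2f}")
```

The paper's t(87) = 3.6, P = 0.001 is the same kind of quantity: a difference in mean ratings scaled by its standard error, with degrees of freedom coming from the number of items compared.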
I always thought that maybe the Earth's magnetic field could affect cultures in different areas of the world, leading to different morals.
I wonder how much this could affect thinking processes and learning as well. For instance, why do cultures on the other side of the world write from the opposite side of the page? Russian seems upside down and backwards relative to English. Is it just cultural relativity, or are there forces that make different things feel natural depending on where you are?
Whoa there, this isn't the kind of magnet you find on your fridge door. TMS uses pulses (not static fields) from a 2 tesla magnet. For comparison, the Earth's magnetic field is 31 µT and a fridge magnet is 5 mT.
But the fact that it's a pulse is more important than the field strength: the goal is to produce a current in the brain via Faraday's law. To do that, you need a changing magnetic flux, and therefore a rapid change in the field strength. Sitting in a static magnetic field, no matter how strong, is not going to induce a current.
tl;dr: Any theories about the Earth's magnetic field affecting moral judgement are bull. Ditto for everyday magnets.
(Correction: I just read the paper. They used a 3T field in this study, not 2T.)
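To put rough numbers on the Faraday's-law point: a back-of-envelope sketch, with assumed (not measured) values for the loop area and pulse rise time, showing why a pulsed field induces a voltage while a static field of any strength induces none.

```python
# Back-of-envelope induced EMF via Faraday's law: EMF = A * dB/dt
# for a loop of area A. All numbers below are order-of-magnitude
# assumptions for illustration, not values from the paper.

loop_area = 1e-4  # m^2, roughly a 1 cm^2 loop of tissue (assumed)

# TMS pulse: ~3 T reached in ~100 microseconds (assumed rise time)
tms_dB_dt = 3.0 / 100e-6           # ~30,000 T/s during the pulse
tms_emf = loop_area * tms_dB_dt    # volts induced around the loop

# Earth's field: ~31 microtesla, essentially static on neural timescales,
# so dB/dt is ~0 and nothing is induced no matter how long you stand in it.
earth_dB_dt = 0.0
earth_emf = loop_area * earth_dB_dt

print(f"TMS-induced EMF:    {tms_emf:.2f} V")
print(f"Earth-field EMF:    {earth_emf:.2f} V")
```

The takeaway is that the induced voltage depends entirely on the rate of change, which is why the tl;dr above holds: static fields, fridge magnets, and the Earth's field induce essentially nothing.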
There are other species that are influenced by or rely upon the earth's magnetic field. I'm not a biologist and haven't heard of any evidence that humans are magnetically sensitive in some way, but knowing that the trait has evolved in multiple species suggests it could be found in some form in some humans.
This makes me think that if you speak nonsense you have a random chance of getting something brilliant out. My brother is like that, it's damn annoying most of that time.
Umm... am I the only one who sees something wrong with this? We'll mess with your brain's ability to function and then give you some kind of test to see if indeed your brain is malfunctioning in some way. This seems absurd. I'd go as far as to say the region of the brain the magnet is pointed at has nothing to do with morality; all they are doing is hindering the brain from doing what it usually does.
Are you completely against medical research? The researchers are deactivating a specific area of a person's brain (temporarily, and without surgery) to observe its effects on behavior and judgment. This research, in turn, might contribute to eventually alleviating the devastating behavioral changes some stroke victims may experience.
You know, I'm getting really tired of people misrepresenting what I said by interjecting a question that had nothing to do with what I was saying. My point was that the brain is more interconnected than people admit, and of course if you mess with some parts of it you are going to get impaired functioning. How is this research? Take a really complicated biological system, deactivate some part, and observe the cascade. I expect more from people with Ph.D.s getting millions of dollars in funding. You don't need a theory to do this kind of research; all you need are some fancy toys.
By your philosophy, we should abandon research on the brain because its complexity is overwhelming.
The Ph.D.s you are disparaging are not intimidated by its complexity, and are trying to better understand functioning within the brain. Bottom line: in my opinion, that is respectable and worth the effort.
The better question is: In your opinion, how could research on the brain be improved? How should it be performed? Do you believe research on the brain should be encouraged?
What philosophy? You read three of my sentences and extrapolate a philosophy? Intimidated Ph.D.s? You still haven't addressed the fact that a powerful magnet is being pointed at an extremely complicated, interconnected biological system in order to hamper its functions. Yeah, I understand that on some tests people are going to perform worse because a powerful magnet is messing with their brain. Is that surprising? No. I would even expect it. Any layman would.
I believe in clearly delineated research objectives backed by solid scientific theory and not some toy project that messes with electrical currents in the brain jazzed up by moral and ethical philosophy.
You assert that this research is just "a toy project that messes with electrical currents in the brain jazzed up by moral and ethical philosophy".
Where is your evidence that it isn't "clearly delineated research objectives backed by solid scientific theory"?
Surely Occam's razor suggests that people actively researching in the field aren't complete morons, and have some idea of what they are doing relative to someone who (presumably) has no experience in this field. What evidence do you have for your conclusions?
The research itself isn't wrong to me, but I do see a danger in reporting these results as if they mean something definitive. The results are interesting and could lead to some serious breakthroughs, but we barely understand what this is actually doing inside the brain or what the long-term effects could be. It's just dangerous to start making wild assumptions about how morality works and to start applying "patent" medicines to people.
In a few hundred years I'm sure we as a race will look back horrified at the way we handled medical science.
But the key is that the brain is impaired in very specific ways. If you asked the person to do any task that was not a moral judgement and did not use theory of mind, they would be unaffected.
This is how the brain works: manipulating certain parts (e.g. by injury or, as in this case, stimulation) of the brain impairs certain capabilities while leaving the remainder intact. If you put this in another part of the brain the person might not be able to move their right arm, or might hallucinate faces, or their personality might change. The incredible thing is how specific the changes are.
It might be worth noting that those are still hypotheses at this point, and a whole range of things don't appear to be localized. In particular, showing that disabling a certain part of the brain changes an outcome doesn't show that anything was localized in that part of the brain, only that it was necessary to the outcome (but not necessarily sufficient).
In addition, there's often a sleight-of-hand, where the neuroscientist takes some term for a high-level concept (like "morality" or "intelligence") and uses a specific operational definition of it that seems rather suspiciously tailored to make their experiment work, as opposed to choosing a formalization that's already in the philosophy literature. Sometimes the operationalization is even chosen after the experiment, i.e. the researchers find some interesting feature in their data, and try to find a way to map it onto concepts like "intelligence" or "morality" or "motivation" after the fact. (To be fair, that doesn't seem to be the case here; and sometimes when it is the case, it's the fault of the university's PR office.)
You're right, and it's a pity when good research is spun to make it sound further along than it really is. It's even worse when the scientists do it themselves. Thankfully, that doesn't seem to be the case here; their definitions are specific and the hypothesis builds directly from prior results (one author's PhD thesis).
I get what you're saying about specific parts acting in specific ways. Neuroscientists have been working off the modular theory of the mind for a very long time now. I'm not saying it's the wrong approach, but morality is a really high-level construct, and to say it is localized to a very specific region just seems absurd.
The modular theory of brain function has given way to a more nuanced understanding. Some functions are very localized, while others are quite variable between individuals. Functional localization depends on many factors, such as the wiring of afferent and efferent neural tracts, gene expression, and plasticity. For some functions, this strictly constrains the location, for others it doesn't.
But, the point is that it doesn't matter if it seems absurd to you that something "high-level" like morality is localized. The evidence says it is.
I disagree. All they have shown is that some higher-level reasoning which factors into moral decisions is hampered when some part of the brain is not allowed to function properly. If you see evidence of localized morality in this, then you need to take a closer look at the paper, because the evidence does not say that.
Yeah, these kinds of things are usually massively oversold as if we have the ability to precisely control things about the mind by poking at the brain, despite understanding the relationships pretty poorly. Usually it's more like trying to change how your web-browser operates by soldering on your motherboard: sometimes you'll get lucky and actually do something, but usually you'll either break it or cause generalized weirdness.
Breaking some normal function by applying a specific set of conditions to the brain is where a large portion of our medical discoveries stem from.
Even if findings from this research don't give us any new practical knowledge for everyday life, they could, coupled with other similar, seemingly trivial findings, facilitate major breakthroughs in the future.