I didn't downvote, but I have an inkling. I'll share what some of these people may be thinking about your comment. I'll be hyper-critical here because you specifically asked why someone might have downvoted, so don't take it personally.
"Reading about rewards in AI context makes me wonder, why current AI only focuses on the network topology aspect of natural intelligence."
Well, it doesn't. Not at all. You seem to conflate one very specific AI technique (deep neural networks) with all of AI. That one technique is currently insanely popular and has been heavily reported and hyped in mainstream media, which makes it look as if your education in the field did not go beyond reading a couple of mainstream articles. Also, there's a whole huge subfield of AI (reinforcement learning) that is actually defined by its focus on learning from rewards, which you seem not to be aware of.
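To make the "learning from rewards" subfield concrete: this is reinforcement learning, and its simplest form fits in a few lines. Below is a toy tabular Q-learning sketch on a hypothetical 3-state chain where only reaching the last state pays a reward; the environment, states, and numbers are all made up for illustration, not taken from any particular paper.

```python
import random

# Toy reinforcement-learning illustration: tabular Q-learning on a
# hypothetical 3-state chain. Only reaching the final state pays reward.
N_STATES = 3          # states 0, 1, 2; state 2 is terminal and rewarding
ACTIONS = (0, 1)      # 0 = stay put, 1 = move right
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):
    """Environment dynamics: moving right advances; reaching state 2 pays 1."""
    next_state = state + 1 if (action == 1 and state < N_STATES - 1) else state
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)
for _ in range(200):                      # 200 training episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)        # explore purely at random
        s2, r = step(s, a)
        # Q-learning update: nudge Q[s][a] toward reward + discounted
        # best estimated future value of the next state.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
```

After training, "move right" scores higher than "stay" in every non-terminal state, i.e. the agent has learned the rewarding behaviour purely from the reward signal, with no notion of network topology at all.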
"What about the neurotransmitters like dopamine or serotonine? Could they (or more specific: their effects) be key to an AI that feels more “natural“?"
Not applicable as written; too broad and unspecific. "Not even wrong." It looks like someone with superficial knowledge throwing a few sticks into the blue, in the hope of provoking a reply that competently tries to make actual sense of it.
If someone only has familiarity with AI from reading passing news stories, it is reasonable to think they would have such a perception of the field. I think your diagnosis of the downvoters might be correct, but "not even wrong" seems a bit extreme. AFAIK somebody out there is probably trying to do a research paper on simulated neurotransmitters as we speak. More interesting would be if anyone had links to existing research in that area. It may not be high-efficiency or high-performance technology, but as a learning tool for exploring theoretical models I think it could be very beneficial.
The topic reminds me of a book I read (and was fascinated by at the time), The Muse in the Machine by David Gelernter. It seems he has moved away from AI research and into political punditry, however, and recently made headlines denying that the moon landing ever happened. Nevertheless, that book contained a lot of exposition around the idea of creating AI with simulated emotion. That is in line with the idea of simulating neurotransmitter function, and it also has parallels with more general mathematical techniques in widespread use. So the book (and perhaps the idea itself) are a bit dated, but I don't think that makes them less relevant.
IIRC there was at least some work done in the 80's and 90's on neural networks where neurons were located in a 2- or 3-D space and could emit "chemicals" that would "diffuse" through the simulated environment and influence neurons' behaviour depending on local concentration. Nothing new here, but maybe the idea is worth another look. It wouldn't be the first time an old idea turned out to be a really good one decades after its inception. The ground may be more fertile for this now than it was three decades ago.
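The mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration in the spirit of that line of work (e.g. GasNet-style models), assuming Gaussian decay of concentration with distance and gain modulation of a sigmoid activation; the function names and constants are my own, not from any specific paper.

```python
import math

def gas_concentration(emitter_pos, pos, strength=1.0, radius=2.0):
    """Local 'chemical' concentration at pos, assuming Gaussian decay
    with distance from the emitting neuron (an illustrative choice)."""
    dx = emitter_pos[0] - pos[0]
    dy = emitter_pos[1] - pos[1]
    d2 = dx * dx + dy * dy
    return strength * math.exp(-d2 / (radius * radius))

def modulated_activation(x, concentration, base_gain=1.0, mod_scale=4.0):
    """Sigmoid whose gain is scaled up by the local gas concentration:
    the 'chemical' changes how the neuron computes, not just its input."""
    gain = base_gain + mod_scale * concentration
    return 1.0 / (1.0 + math.exp(-gain * x))

# A neuron near the emitter at (0, 0) responds much more sharply to the
# same input than a neuron far away, where the gas has diffused to ~0.
near = modulated_activation(0.5, gas_concentration((0, 0), (0.5, 0)))
far = modulated_activation(0.5, gas_concentration((0, 0), (10, 0)))
```

The point of the design is that spatial position matters: two neurons with identical weights behave differently depending on where they sit relative to the emitter, which is exactly the neuromodulator-like effect the original comment was groping toward.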
Still, please note that the critiqued comment did not go so far as to suggest any idea about the mechanisms involved beyond (implicitly, not even explicitly) "somewhat like in the brain". There's just not enough meat there to do a proper critique, and on that basis I pulled the "Not even wrong" card, which is of course a bit harsh, but that was the whole point of trying to put myself in the position of a downvoter.
> it is reasonable to think they would have such a perception of the field.
It really isn't. What's reasonable is to assume that whatever you thought of in 10 seconds has been deeply researched, regardless of whether you're aware of it. What's reasonable is to assume researchers are vast leagues ahead of whatever shallow thoughts you have on a subject, since you know your understanding is shallow. A reasonable person simply asks for links to read deeper into a subject; they don't criticize something they understand only shallowly and ask why no one is doing X.
Not criticizing you dude, just commenting on why there may have been down-votes; people don't like it when you imply even unintentionally that researchers are stupid and aren't looking at the right things. I have no expertise in this area so I can't point you at any relevant research, but you can bet if there's something you have a question about, someone out there has a PhD in that topic.