If and when sentient AI becomes reality, will it, too, have the capacity to both anthropomorphize and dehumanize other beings in relation to itself? And if it did, would we view it as more or less human depending on which skill it used to refine its opinion of humanity?
For less humanoid systems, people respond idiosyncratically. For example, some people name their Roomba vacuums and speak of them almost as pets, while others do not. So what aspects of a system might provide those triggers?
From childhood, I have regarded The New York Times as the world's greatest newspaper and the leading source of reasonably objective information. Yet there is something about it that has always bothered me.