On Modeling the Cognition of Non-Human Animals

Harun Ćurak, 2023

Modeling the cognition of non-human animals (NHC) is a challenge for many reasons. Most notably, the cognitive ethologist runs the risk of projecting frameworks used to make sense of human cognition onto NHC prematurely, or in a manner that presupposes systemic similarities between the two that may not be present in reality. The root cause of the issue might be that this jump is attractive to the modeler in light of their biases related to human cognition. However, the rest of the paper will deal less with the causes of this problem, and more with how the contrast between humans and non-human animals creates explanatory challenges for the ethologist. To that end, I provide two specific examples of the differences between what computational models postulate and what is observable with regard to NHC. In most cases, to be applied to NHC, the models we use will require at least a slight modification; at worst, what is being modeled might be fundamentally different from the model itself.

First, it should be noted that there are indeed many explanatory challenges facing the cognitive ethologist. Although there is a profound similarity between human and non-human animal cognition, a certain kind of high-level cognitive behavior seems reserved for humans. Two striking examples are our capacities to use and understand causation and complex natural language. Non-human animals may exhibit behavior that resembles these capacities but is limited in some significant way, thereby presenting a conceptual challenge to the cognitive ethologist. What kinds of cognitive systems do non-human animals rely on, if not on those present in models of human cognition? Does their existence, or lack thereof, imply that a drastic reworking is needed in order to model the phenomenon properly? This is not to allege that we need to be conscious of the phenomenon, or to be non-human, in order to model NHC; it is rather a more subversive charge exploring the sort of confirmation bias we may have when attempting to model non-human cognition with the models we already have for human cognition in mind.

Starting with causation, it is clear that humans are able to extract analogical relations between non-trivially interwoven representations of events or objects; thereby, we form frameworks of causal mechanisms in the world on the go. Computational models of human cognition, especially classical computationalist ones, uphold this fact by their very nature: causation can be derived from the conceptual interrelation of causing and resulting symbols, whether or not they represent something being perceived at the moment of cognition. (Connectionist models may place a black box over the defining properties of the causal process, but the concept is still present.) That is, we may form notions of causation for things that are not immediately perceivable, for example the causal power of mental states. When it comes to NHC, however, the notion of causation in general is more difficult to find. Experiments with capuchin monkeys support this claim (Visalberghi & Limongelli, 1994; see Appendix 1): the monkeys were unable to establish a causal relationship between the hole and the inaccessibility of the food, or between moving their arm and making the food (in)accessible. If computational frameworks of cognition encapsulate causation, and if NHC truly does not contain the concept, our models cannot explain why animals sometimes display behavior indicative of causal understanding.

Next, and perhaps most intriguingly, humans’ capacity for verbalized, complex natural language and sentence formation is not seen across NHC. Proponents of connectionist models, having taken as inspiration the biological medium in which human cognition (including language) is realized, want to argue that neural networks and deep learning can account for this cognitive capacity; the latest NLP and NLG engines certainly do seem promising in that regard. Yet insofar as such a model accounts for these behaviors in humans, it cannot be applied without alteration to model NHC. Indeed, the development of the cognitive function of language during evolution may have skewed the very composition of human cognition. Thus, perhaps it is completely impossible to model NHC using a connectionist approach: the biologically inspired system could implicitly be indivisible from a conception of cognition that is exclusive to humans. In the worst case, NHC behavior that resembles complex human behavior might only seem complex, while being performed without any anticipation of the outcome; perhaps it is merely a product of natural selection, in which case no model of human cognition can capture it. Perhaps the development of language has shaped human cognition in such a way that attempting to project that model onto linguistically simpler species is a futile pursuit.


Bibliography

Visalberghi, Elisabetta, & Limongelli, Luca. (1994). Lack of Comprehension of Cause-Effect Relations in Tool-Using Capuchin Monkeys (Cebus apella). Journal of Comparative Psychology, 108(1), 15–22. https://doi.org/10.1037/0735-7036.108.1.15. Available at: https://www.researchgate.net/publication/15025602_Lack_of_Comprehension_of_Cause-Effect_Relations_in_Tool-Using_Capuchin_Monkeys_Cebus_apella


Appendix 1

The Capuchin Experiment - Description

The monkeys were shown food placed inside a see-through tube which had a hole in it. If the monkey pushed the food into the hole, it became inaccessible; if the monkey pushed it away from the hole, it could eat the food. Out of over 80 trials, one of the four monkeys learned not to push the food into the hole. However, when the experimenters effectively removed the hole from the experiment, that monkey kept pushing the food away.