Cognitive ecology & the ethics of intelligence
An ecological model of intelligence changes the way we think about ethics, organisations and AI
Dr John (Jono) Sutton is a philosopher at Macquarie University whom I've had the pleasure of knowing since the early 2000s, when we both participated in a kind of intellectual salon looking at overlaps between Cognitive Science and Continental Philosophy, which ended up being the topic of my Masters.
Sutton works on the philosophy of memory, and in particular on the embeddedness of memory in both the environment and our relationships with others. By the time I met him, he was already well known for his work on memory in Descartes and was exploring how memory is distributed between couples who have lived together for decades.
Watching his recent talk at the Danish Institute for Advanced Study inspired some ideas that I think are relevant in the context of responsible business, ethos design and even framing the challenge of ethical AI.
In that talk, Sutton presents and recasts the "4E" theory of cognition. This theory argues that intelligence (or "enmindedness") is itself:
Embodied - inextricably linked to the bodily capabilities and affordances of the organism
Embedded - inseparable from the structures and relationships it is immersed within
Enactive - dependent upon active interactions with the objects of perception and the environment - i.e. intelligence can't be achieved through the passive creation of theories or maps about the world (no GOFAI, or "Good Old-Fashioned AI")
and Extended* - meaning that we manage our cognitive resources by offloading tasks onto the environment
The first challenge this poses for us is this:
If this 4E model of cognitive capabilities is the best model we have of what intelligence actually is, then how might it change the way we understand the intelligence of an organisation or the teams that constitute it?
In a recent post of mine on LinkedIn, I pointed out that we probably underestimate the impact of restructuring an organisation on the "economics of trust behaviours": we effectively change the relative value of different trust postures (e.g. cheat vs collaborate) by changing the size and relative anonymity of the community in which they are practised.
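To make that intuition concrete, here is a toy sketch (my illustration, not something from the post or from Sutton): it compares the expected payoff of collaborating against cheating as a community grows and interactions become more anonymous. The Prisoner's Dilemma payoffs and the anonymity model are illustrative assumptions only.

```python
# A toy model of the "economics of trust behaviours": how the expected
# payoff of collaborating vs cheating shifts as a community grows and
# becomes more anonymous. Payoffs follow the standard Prisoner's Dilemma
# ordering; the anonymity model and all constants are assumptions made
# for illustration, not figures from the post.

R, T, P = 3.0, 5.0, 1.0  # reward (mutual collaboration), temptation (cheating), punishment

def repeat_probability(community_size, k=10.0):
    """Chance of meeting the same partner again: near-certain in a small
    group, falling toward zero as the community grows more anonymous."""
    return k / (k + community_size)

def expected_payoffs(community_size, horizon=50):
    w = repeat_probability(community_size)
    # Collaborators earn R in every round the relationship survives.
    collaborate = sum(w**t * R for t in range(horizon))
    # A cheat grabs T once, then earns only P from partners who remember.
    cheat = T + sum(w**t * P for t in range(1, horizon))
    return collaborate, cheat

for n in (5, 50, 500):
    c, d = expected_payoffs(n)
    winner = "collaborate" if c > d else "cheat"
    print(f"community={n:4d}  collaborate={c:6.2f}  cheat={d:6.2f}  ->  {winner}")
```

On these toy numbers, collaboration out-earns cheating only in the small community; as the group grows and the chance of re-meeting a partner falls, cheating becomes the better-paying posture. That repricing is exactly what a restructure can do, silently.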
But if the 4E model is correct, and intelligence is embedded, then it isn't just trust that is affected. Intelligence itself is potentially compromised by changes in structure. On the face of it, this is obvious, perhaps - after all, we restructure in order to create a more intelligent organisation, or better, to create an organisation that is intelligent in different ways, smarter at doing different things.
Where things get interesting, for me, is when we start to look at how this 4E model suggests a connection between intelligence and ethics. And the phrase of Sutton's that really sparked this for me is the idea of a "cognitive ecology"**.
There are a couple of germs of ideas here, not yet fully fleshed out (which is why I'm publishing them here - keen to hear your thoughts).
If intelligence is the exploitation or use of the cognitive ecology we inhabit, harnessing its power for a specific purpose, then perhaps ethics is the maintenance and repair of our cognitive ecology. On a 4E model of cognition, it isn't just our brains or our habits that are essential to sustaining our intelligence - maintaining a cognitive ecology involves caring for our bodies, our environment, other people and even our history and the history of those who share this ecology. I suspect a lot of the responsibilities we associate with ethics can actually be derived from vulnerabilities in our cognitive ecology.
Humans have perhaps developed such ethical concerns because some sort of ethics is required to sustain our cognitive ecology above a certain threshold of complexity. The need for ethics may even grow as this complexity grows (which may itself explain the recent turn toward business ethics - businesses have become too complex to manage without it). When people in our vicinity behave unethically, they threaten to undermine the richness not only of our moral lives but of our intellectual lives. Unethical behaviour is stupid, precisely in this sense of degrading our cognitive ecology. On the other hand, animals that don't or can't reach that threshold of complexity don't have the same responsibilities. But that doesn't mean we have no responsibilities to them, since they form a part of our cognitive ecology.
It’s unlikely that moral intelligence is that different from purely practical or intellectual intelligence, so perhaps moral intelligence is just as embedded, embodied, enactive and extended. In which case, the field of responsibility I described in previous posts here could be construed as the moral ecology of a team, enabling it to make decisions that are smart and responsible.
If intelligence is enactive, and requires agency and interaction to exist, then moral agency is a necessity for building moral intelligence. Awareness, information, training and data are necessary, perhaps, but insufficient for responsible teaming.
If intelligence is, by definition, embedded in a cognitive ecology, then there is no such thing as Artificial Intelligence until it is deployed into a cognitive ecology. One route into the ethics of AI, then, is to focus less on performance or accuracy and more broadly on the impact of an AI system on the cognitive ecology into which it is introduced.
I have two reservations about all this:
Grounding ethics in intelligence seems to risk driving us towards intellectualism at best or, at worst, towards the kind of high-modernist bureaucratic catastrophe described by James C. Scott in Seeing Like a State. Nonetheless, it's a trope that business will intuitively understand, so it may be a useful rhetorical device. A richer notion of ecology (beyond cognitive ecology) might help.
How do we account for sacrifice in this model? Whose cognitive ecology is it? Is it mine, is it ours? Who is the "we" that "owns" it and has responsibility for it? At the same time, how do we explain or accommodate the desire to help people we do not depend upon, whose disability or death would have no impact on our cognitive ecology, at least not in our lifetime?
I’ll leave these unanswered for now.
Until next time...
* Sutton prefers the term "Distributed" to "Extended" because, he argues convincingly, the things out there onto which we offload cognitive tasks are not like minds. On the contrary, it's precisely the fact that they work differently from our own minds that makes them so valuable to us as part of our cognitive ecology.
** I find the term “cognitive ecology” doesn’t actually leave much room for the blending of ethics and intelligence that I have in mind. I would suggest that “behavioural ecology” might be a more neutral term. Of course, an even gentler term, one that Sutton himself often uses in this context, is … place.