Waking up to high‐tech AI with human‐tech PIA

“What is the problem with AI and AGI?”  

“Think of the immense advantages!”  

“Stop scare‐mongering!”

“Imagine all that AI can do, so I don’t have to!”

Waking up!

When I first heard about the launch of ChatGPT – a digital instrument producing Artificially Generated Intelligence (AGI) – I felt an overwhelming sense of foreboding. Beyond the buzz, fizz and excitement surrounding the media hype of Artificial Intelligence (AI) more generally, something in me was screaming for us all to wake up; to stop and think more expansively and deeply about the human and planetary implications! This sensation flooded me again earlier in the year, when a colleague shared a handout, excitedly pronouncing that all she had to do was put a question into ChatGPT and out came the information. I felt horrified and noticed the urge in me to run – to disengage from the workshop she was about to deliver. Something seemed off-kilter, but at the time I could not articulate what. Was I simply being an outdated, doom-laden maiden? Perhaps I was tapping into some kind of tacit[i] knowing – ‘knowing I could not (yet) tell’?

In attending to my sense of foreboding about this high-tech AI revolution using human-tech Presence in Action (PIA), I eventually found myself able to surface and grapple with my concerns.

In this article, exploring my reactivity to AI, I make explicit how PIA helped me illuminate, and respond with greater clarity to, what was going on for me. I do it this way for two reasons. Firstly, because I feel so concerned about the implications of humanity falling headlong and blindly into AI as if it were an entirely beneficent resource. Secondly, because offering my own experience (using PIA) may help you discover similarities and differences in your own noticing and processing patterns.

Insights you have about yourself may move you to explore the implications of AI more deeply. I offer signposting (by including references) to anyone wishing to do so. I also share links to learning opportunities, for those of you curious about how PIA might support you in making sense of this topic, yourself, your life and how you engage with all you encounter.  

What is the problem?

Hannah Arendt, who fled Nazi Germany in 1933, alluded to the dangers of losing our ability to ‘think what we are doing’. In the shadow of the war and the Holocaust, in which the Nazis capitalised on the science of eugenics, and reflecting on the wider consequences of the industrial age, it was the first use of the atomic bomb that tipped Arendt to conclude that scientific and technological developments pose the greatest risk to humanity.

In 2017, I attended a conference run by the International Society for the Systems Sciences in Vienna. In a plenary on the development of ‘smart cities’, one of the speakers championed the introduction of driverless cars as the solution to reducing deaths on roads caused by cars driven by people. This seemed a ludicrous solution to me at the time. I posited that if we reduced the number of cars in cities – increasing pedestrianisation – perhaps that too would reduce deaths, without all the other collateral costs imposed by more technology. I was exposing that just because we know how to do something does not mean we should do it. Arendt, too, was calling for us, individually and collectively, to think together about what we are doing – and not to be seduced by science, nor let it dominate or replace the political, ethical and moral debates required to safeguard humanity and the planet.

Roll forward nearly 80 years. We find ourselves in the aftermath of a global pandemic and in a worldwide onrush, in which fake news and Artificial Intelligence (AI) are (becoming) ubiquitous. They are swamping our digital realms to such an extent that few of us can discern what is useful information and what is pure fabrication. And now we have AGI – Artificially Generated Intelligence.  

According to Kate Crawford, a leading scholar on AI[ii], there are around six to eight companies reaping billions from their domination of large-scale worldwide computation. In contrast, the 8 billion people on the planet have virtually no power over what is happening in these realms. The global concentration of economic power related to generative AI rests with just four billionaires. With the rapid, exponential uptake of AGI, it is they who are set to gain the most. For example, ChatGPT is the fastest-growing app in history, reaching 100 million users in its first two months. Its creators expound the potential benefits of AI; yet others, also implicated in its development, are cautioning about its past, current and future dangers. As Crawford asserts, these are not simply related to what is channelled through the interface of our computers and mobile devices, but are linked to monumental violations of people and planet.

Summarising Crawford’s wide-ranging research on the current and potential consequences of AI, Sue Halpern of the New York Review of Books comments:

“Drawing on more than a decade of research Kate Crawford reveals how AI is a technology of extraction: from the minerals drawn from the earth, to the labor pulled from low‐wage information workers, to the data taken from every action and expression. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.”

I am, however, focusing on the implications arising from the likes of you and me coming to rely on AGI sources. Few of us (in planetary terms) know enough about AI; even fewer know how to use it beyond simplistic applications. This puts us at greater risk from those who know how to create it – and how to abuse it, and us, for their own ends.

AI and generative AI instruments, like ChatGPT, replicate and simulate, but they cannot create true novelty. Only living human beings can do this, when we leverage all the faculties and capacities available to us through our material, being‐doing‐learning bodies. Crawford recognises this and urges us to engage critically. She calls for greater democratic participation. Like me, she is echoing Arendt’s message: there is an urgent need for more of us to engage in ethical, moral, social and political discourse to change what we are doing, for the sake of humanity and planet.  

Cultivating our capacity to ‘think what we are doing’

Something is profoundly amiss. Those of us who want to rush into using AI/AGI are in danger of giving up on our own capacity to think for ourselves. When we stop thinking for ourselves – and stop thinking together about what we are doing – we are effectively relinquishing our capacities to search, discover, engage in discourse, discern and learn together. We thereby compromise the trustworthiness of our own sensemaking, i.e. the process(ing) that gives rise to new knowing within and between us, and that ultimately enforms[iii] the actions we take.

If we remain in a state of disregard and disengagement, and if we do not take radical responsibility for what we do (and do not do) in these matters, we will likely make ourselves victims of the worst that AI can wreak. This is not an improbability. It has happened in the past. The seeds of citizen disengagement and abdication of responsibility enabled the Nazis to take power in Germany in 1933:

“Like the mood in August 1914, that of 1933 represented the actual power base of the coming Führer state. There was a very wide sense of release and liberation from democracy. What is a democracy to do when the majority of the population no longer wants it? There was a desire for something genuinely new: popular rule without parties, a popular leader figure.”

Journalist and author Sebastian Haffner (1987)

I find Haffner’s comments all the more chilling because we are witnessing this unfolding again, in nations across the world – threatening the efficacy, fabric and foundations of democracies. The manipulation of minds and peoples in the pre‐digital era that led to both world wars is horrifying enough. How much worse can it become, with the grip of AI and AGI affecting and, arguably, infecting human institutions? 

Getting into the guts of AGI

In resorting to, and relying on, AGI, we are, in effect, assuming that what is produced is understood by the ‘AI generator’. We are also assuming that it is accurate, reliable, trustworthy and true. Campolo and Crawford alert us to the notion of “enchanted determinism”[iv] – referring to how quickly people fall into believing that what is written is truth; that what is produced is good[v]. Crawford helps dispel the myths. The tech-bound ‘generator’ of AI does not actually understand language, or the meanings that language conveys. It is a mega-machine (or machines), coded with algorithms designed to scour large-scale datasets (i.e. everything available on the internet, uncensored). Gone is the imperative for datasets to be ‘cleaned’ of vitriol, abusive material, etc. by discerning human beings. The algorithms simply detect repeating patterns, make classifications and predictions based on those classifications, and reproduce what, to an uncritical reader, may seem plausible.
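For readers who would like a concrete (if drastically simplified) sense of this ‘pattern detection and prediction’, the toy sketch below – my own illustrative example in Python, emphatically not how ChatGPT or any production system is built – generates text purely by replaying statistical patterns. The tiny ‘corpus’ is invented for illustration; the point is that nothing in the program understands a single word it produces:

```python
import random
from collections import defaultdict

# Toy 'training' corpus - hypothetical, for illustration only.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# 'Training': record which word tends to follow which (pattern detection).
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# 'Generation': predict each next word from the detected patterns.
# Nothing here comprehends cats, mats or rugs - it only replays similarities.
word = "the"
output = [word]
for _ in range(8):
    candidates = follows[word]
    if not candidates:  # dead end: no pattern was ever observed after this word
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # plausible-looking, e.g. "the cat sat on the mat the dog"
```

Scaled up by many orders of magnitude, and with vastly richer statistics, this is the sense in which plausibility can masquerade as understanding.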

Much is written and available through the internet and in social media streams, producing oceans of opinions shared as if they were facts. People’s opinions, biases/prejudices, emotive diatribes, photos of breakfast, selfies on holiday, pornography, etc.: the list of what can be trawled in cyberspace is seemingly limitless. That something is written, said or shared is a fact; the actual content may not be. Algorithms that scrape the internet do not differentiate between factual and fabricated content. This means that we, as prosumers (producers and consumers) of ‘data’ streams, can find ourselves implicated in perpetuating misinformation, i.e. when we replicate/share something because it aligns with our views, even though our views may have no foundation[vi] in fact.

Additionally, we may fall foul of disinformation. This is when the ‘author(s)’ share something, intentionally capitalising on a ‘lie’ or ‘part-lie’. For example, former BBC journalist Gabriel Gatehouse, in his podcast series “The Coming Storm” (2022)[vii], exposes disinformation trails across the world, linked to QAnon, that far pre-date – yet relate to – the Capitol Hill riot. He illuminates how these trails were exponentially replicated so as to manipulate and amplify the extreme actions of others, serving those seeking to access and/or subvert political and economic power. As I understand it, this seems to be the calculated exploitation of the phenomenon of enchanted determinism.

When using AGI, the quantity of data it draws upon is vast. Yet, because the algorithms are designed to seek similarities, with no reference to attribution, cross-checking or verification, we lose access to the range of data and data sources. This reduces the diversity of expertise and perspectives, and the reliability of sources. In so doing, we risk relinquishing our critical sensemaking faculties. Unwittingly, we make ourselves servants to whatever AGI delivers to us. This is not some far-fetched, futuristic, fatalistic fantasy. It is already happening in social media realms. AGI amplifies bias because it is engaged in pattern recognition, not comprehension; it replicates and reinforces similarities, and is incapable of doing what only human beings can do – use the entirety of our material, living, being-doing-sensing bodies to notice, illuminate, create and engage in conscience-infused, coherent sensemaking.

Notwithstanding the greatness of our potential, we human beings are also fallible and gullible. If something aligns with our own assumptions/biases, we are (non-consciously) less likely to think critically. In other words, we may fall into repeating, reactive patterns and might simply accept something without fully engaging all our faculties and capacities in contemplating what is being expressed. In such situations, the autopoietic[viii] principle essential for learning – in which a person ‘can only know and incorporate what they create within themselves’[ix] – will be, at best, compromised and, at worst, irreparably fractured. To avert this erosion of our ability to learn, adapt and sustain our own lives, we need to be(come) consciously and passionately proactive. This means enhancing our natural sensemaking abilities: recovering how to use ‘all of our being’ in attending and responding to all we encounter. We are situated selves – always in and of our relational, wider world and Kosmological[x] realms.

We are more than AGI

We are incredibly sensitive, potentially sophisticated, sentient, sensemaking living beings. I say ‘potentially’, because without proactively expanding our acuity (i.e. our capacity to notice more of that which occurs between, beyond and within our singular beings), we reduce ourselves to the modus operandi of AGI – which, of course, was created by human beings!  

What am I conveying here? Kahneman[xi] argues that we have two modes of thinking – fast and slow. Non-conscious, reactive, fast thinking generates our own bias-perpetuating meaning-making, which leads us to react repeatedly in the same way. It draws upon what is familiar. When dealing with complex, never-experienced-before situations, we need to call upon ‘slow’ thinking capacities that challenge and circumvent our repeating reactivity. Essentially, there seems to be little difference between AGI and the fast thinking that each and every one of us is doing every moment of our existence, except that AGI draws upon a vast data pool far beyond the capacities of a singular self. It magnifies the fallacy that if many, many people think/say/do the same thing then it must be right, true, valid. This is very far from ‘intelligence’[xii]. That a billion people expressed or did the same thing would be a fact; but it would not make ‘that thing’ right, valid, reliable, trustworthy, coherent or intelligent!

To sum up, machine-like[xiii] ‘thinking’ – what AI engineers more accurately tend to call ‘machine learning’ – relies on pattern detection, classification and amplification of similarities. This corralling and perpetuation of similarity drowns out our ability to tap into human creativity, critical (by this, I mean ‘conscience-infused’) thinking and coherent sensemaking, all of which rely on recognising distinctions, admitting[xiv] differences and noticing the interrelating dynamics between them. AGI does a really good job of fast thinking but cannot do the equivalent of slow thinking.

What are we to do? How can we equip ourselves to use AGI responsibly, for what it is best at – mitigating the risks of our own ill-considered reliance on it – whilst resourcing ourselves to augment our creative, slow-thinking capacities, i.e. to do better what only we living human beings can do?

Activating regenerative response-ability

IF we do not wake up to its potentially deleterious consequences, AGI is set to deepen our dependency on it. We need to take personal and collective responsibility for how we engage with it and what we do with it.

Returning to the questions I posed in the first section… I know myself well enough to recognise that, when I notice I am having such intense experiences, some tacit knowing has already begun to channel itself through my actions, even though I do not (yet) have the words to convey what is be(coming). I recognise that I need to access more data to enable new knowing to formulate. Below, I illuminate something of the processing patterns that came into play when the launch of ChatGPT entered my consciousness. Perhaps you may recognise something similar in your own processing dynamics?

I noticed my being-doing body shift into a state of high acuity. I now[xv] describe this as an enhanced, natural (abductive[xvi]) sensemaking dynamic[xvii]. Tapping into ‘all of my being’ in concert, I attuned to noticing, and drawing in and on, sometimes seemingly random data from diverse contexts, sources, places and people. Amidst all I have done these past months, the following relational and contextual encounters stand out as particularly relevant in moving me to put onto the page all that you are now reading:

  • In January 2023, I attended my doctoral graduation, following seven years undertaking a Systems Science PhD. I felt surprised that this public, symbolic acknowledgment signalled such a significant internal shift of confidence within me. I found myself deeply trusting the efficacy of the emergent approach I adopted in my research, which has distilled into, and helped me hone, the living-learning praxis of Presence in Action.

  • Around that time, I started digesting Arendt’s book, “The Human Condition”[xviii], having heard about her whilst attending a four-week physical theatre summer school in Berlin in 2022. In May 2023, I returned to Berlin, deepening my encounter with integral performance-making. My tapping into embodied ways of knowing ‘becoming’ was illuminating that so-called ‘intelligence’ cannot exist outwith a material, living-being-digesting body. Books cannot understand language any more than computers can!

  • June 2023 found me returning to the continent of my birth (Africa), participating in, and presenting at, the International Society for the Systems Sciences (ISSS) conference. Amidst visceral, deeply personal ‘re-memberings’ of my childhood years on African soil, I was exposed to an array of on-point and seemingly tangential experiences. I encountered rich seams of relational grappling, and arid exchanges, that illuminated much about generative and degenerative ways of co-creating and conveying knowing. What burst alive in me was recognising the truly embodied, deeply personal way in which knowing arises in and between each of us, when we allow the natural inflow, confluencing and communion of streams of ‘data’ flowing through our situated selves.

  • Come July, I moderated a plenary at the “Trust and Integrity in Democracy” conference convened by Initiatives of Change. Entitled “Facts, Feelings, Fictions”, our plenary covered the topics of fake news and AI and their impact on, and implications for, cyber security, politics, humanity and the planet. It was clear to Breon Wells (The Daniel Initiative), Adam Nosal (working at the intersection between consciousness, technology and digital democracy) and myself (the creator of Presence in Action) that the problematic ramifications of digital advances need human responses, not technological solutions, because the genesis of the problems is human-made, NOT technological! This looped me back to Arendt’s prescient insights.

  • In August, at the 40th Edinburgh International Book Festival, I attended Kate Crawford’s session, in which she set out her knowledge of, and deep concerns about, generative AI (woven into the fabric of this article). She offered context, detailed facts and expert knowledge that have helped me enrich and articulate what my disquiet has been trying to access.

  • And finally, at a reception for the Authors’ Licensing and Collecting Society, a set of AI principles informing Intellectual Property (IP) policy development was distributed. These principles illuminate the extractive threat posed by AI – in particular, the way in which AI engages in the unacknowledged pillaging of IP produced by creatives, whose livelihoods depend on being paid fairly for the use and consumption of their original contributions.

I list these as chronological events, but my sensemaking of the range of data I was accumulating was far from linear. An un-orchestrated, unpredictable, nonlinear processing dynamic has been playing out within me. Only in hindsight can I trace the import, relevance and impact on me of these key encounters with others. In other words, I found myself in seemingly unrelated terrain, engaging in and exploring what was arising in and coming through me, in a systemic (i.e. non-systematic) manner. At unplanned moments, I noticed myself attempting to articulate, verbally and visually, what I was beginning to grasp, literally and metaphorically. Through these months, I have felt many paradoxical emotions, ranging from confusion, revelation, despair and outrage to excitement, wonder and delight. This array of feelings was my primary guide, alerting me to commune with all that was calling for my attention.

If this seems a strange account of ‘coming to know (how)’, I invite you to recall or reflect on how babies and toddlers come to know how to do something like walking. They notice. They feel. They flail their limbs around as they find their way in and through space. They vocalise first through screams, crying and gurgling, long before words form and flow in and out of their mouths. They learn to move by wriggling, writhing, crawling and falling. They engage in whole-body interior, yet situated, sensemaking, unmediated and uncontrolled by others, yet influenced by all they encounter. They have a go, because they are moved[xix] to do so; and their numerous attempts eventually culminate in the complex motor co-ordination that belies the immense self-organising orchestration of breath, blood, limbs, bones, organs, brain, senses, sensemaking, guts, skin, nutrients, fluids, etc. They walk!

Your own toddler body learned to walk long before you consciously knew ‘what walking was’ or ‘that you could’ do it; long before you were able to talk about doing it, let alone talk about how to do it.  

Personal knowing is generated within and through our being-doing bodies; we are our own knowing generators, and we are always processing, whether or not we or anyone else wants us to. But the efficacy, reliability and coherence of our processing rely on giving ourselves free rein to notice what goes on around and within us, and to experiment with what to do with it. Through our continuing improvisation, we develop the metaphorical and literal mental, emotional and corporeal muscle to cope with and handle new knowing as it arises.

Now more than ever, I believe we are being called to enhance this natural processing dynamic as an antidote to the momentum building around so‐called machine‐learning. We are being called to take radical responsibility for what we each do. What unfolds next for humanity depends on all of us.

Embracing radical responsibility

Our natural sensemaking dynamic (which moves us to action, when called upon to do so) depends fundamentally on our capacity to notice. Bringing awareness to our noticing is what sets us apart from other living beings (insofar as we know). And it is our awareness that is in urgent need of awakening and enhancing. We need to notice that we are noticing; to notice what we are noticing; to notice the scope and focus of our attention, to help us notice what ‘fields of awareness’ we tend to miss; to notice that others notice things we do not, which tells us something about the limitations of our personal noticing; and to notice that when we admit[xx] what others notice, our personal and collective noticing capacities expand. When talking about the ability to notice, I am referring to the capacity of ‘acuity’. In using this word, I am invoking the use of all our faculties[xxi], rather than privileging one – such as seeing, hearing or touching – over all others.

I have found that a person’s acuity can be dramatically enhanced through the recursive utilisation of the tri-fold scaffolding[xxii] comprising the nonlinear praxis of Presence in Action (PIA). The P6 Constellation (Figure 1) is the core representation supporting those practising PIA. Specific data in each of the outer ‘portals’ interrelate with each other, generating and replicating patterns of thinking, being, knowing and doing. PIA disrupts fast-thinking, repeating reactivity and supports responsive, generative sensemaking. It is beyond the scope of this article to explain how to practise PIA; however, do follow the QR codes at the end of the article to find out more.

This uniquely human, living-learning praxis supports me and other PIA practitioners in our daily lives. It is intuitive, leveraging our natural human sensemaking capacities. In embracing it, we become better able to notice what is calling for our attention on the margins, beyond the reach of our dominant, linear, fast-thinking processing. Practising PIA supports us to slow down long enough to attend to what may seem irrelevant or insignificant within and beyond our situated selves. It helps us attune to our being-feeling-doing-knowing bodies; and to attend to what we are noticing – without needing to ‘know’ why – trusting that our interior sensemaking is onto something of import that is yet to become knowingly accessible to us.

Increasingly, we become keenly alert to the ways in which our use of language illuminates what is going on in each of us, and how we may consciously (with manipulative intent) or non-consciously (through lack of awareness) provoke, evoke and invoke particular reactions in ourselves and others. The praxis of PIA has the potential to awaken us to attempts at manipulation; and to support us in detecting these patterns in all our exchanges, everywhere we are… including in our encounters with AGI.

I recognise now that it is in picking up and replicating our language patterns that AGI can be so damaging and dangerous at scale. With this capacity for global-scale manipulation, I wonder what safeguards are in place for people and planet? Crawford suggests there are none – or rather, that what is there is wholly inadequate. She is calling for worldwide legislative instruments, such as those put in place after the use of nuclear bombs by the United States at Hiroshima and Nagasaki. In the absence of these, what can we do? Without individual and collective commitments by those invested in AI and AGI to safeguard their own trustworthiness, the rest of us are unwittingly putting ourselves at significant risk… unless we take personal and collective action to inform and equip ourselves.

In reflecting on this, I realise something else. Just as high-tech AI/AGI can be used to manipulate, so can human-tech PIA. Those who ‘know how’ can, with mal-intent, control those who do not know how. Yet I see one clear distinction. The design, coding and mechanics of AI/AGI are far beyond the reach of ordinary folk. Moreover, those who have their hands on the levers of power are exercising that power over others, ultimately in self-serving, wealth-accumulating, humanity-harming, planet-damaging ways. As unthinking users of AI/AGI, we are implicated in this impending planetary catastrophe.

In contrast, PIA is accessible and available to those wanting to break free of their own self-sabotaging, non-conscious thinking and behavioural patterns. In embracing radical responsibility for themselves, they knowingly enter into a learning process in which they will be changed – not by what others do, but by what they themselves do. This is key. AGI is about changing others; PIA is about changing oneself.

This brings me back to the intentions of the individuals who make AI/AGI and make it possible (e.g. stakeholders, investors, coders, producers, suppliers, legislators, etc.); and to those who practise PIA. Members of PIA Collective CIC[xxiii] embrace a common commitment to safeguard our own trustworthiness. The contrast is stark. As PIA practitioners, we open ourselves up to process, host, witness and learn together as a Community in Practice. This supports each of us to engage in our own lives – wherever we are, whatever we do – with increasing transparency, clarity, integrity, coherence and joy. We are, inevitably, imperfectly human; yet we choose to meet ourselves and each other with increasing courage, humility and conviction that we can resource ourselves to advocate not only on our own behalf, but for the well-being of that which sustains us all.

Presence in Action is for the bold-hearted and determined. It is not for the unwilling, nor for those searching for a ‘tool’ or a quick fix for individual gain. It is for those recognising that, for (our) planet and humanity to persist, we need to take radical responsibility for our own actions. We need to recognise and relinquish our own reactive patterns so we may act with response-ability. Only then may we find ourselves able to engage compassionately with others in mutual, contextual learning[xxiv], so that future fruits of our own genius, such as AI/AGI, do not sow the final seeds of our own destruction.

Links to further learning

REFERENCES TO READ

Andersson, P. (2015) Scaffolding of task complexity awareness and its impact on actions and learning. ALAR Journal, 21(1).

Andersson, P. (2018) Making Room for Complexity in Group Collaborations: The Roles of Scaffolding and Facilitation. Doctoral thesis, University of Gothenburg. Available online: http://hdl.handle.net/2077/57854

Andersson, P., Ringnér, H. & Inglis, J. (2017) Constructive Scaffolding or a Procrustean Bed? Exploring the Influence of a Facilitated, Structured Group Process in a Climate Action Group. Systemic Practice and Action Research, 1‐19.

Arendt, H. (2018) The Human Condition, Second edition. Chicago & London: University of Chicago Press.

Bateson, N. (2016) Small Arcs of Larger Circles – Framing through other patterns. Triarchy Press.

Campolo, A. & Crawford, K. (2020) Enchanted determinism: Power without responsibility in artificial intelligence. Engaging Science, Technology, and Society.

Crawford, K. (2021) The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Freed, M. (2009) A multiperspectival conceptual model of transformative meaning making. Ph.D. thesis, Saybrook Graduate School and Research Center.

Freeman, W. J. (2007) A biological theory of brain function and its relevance to psychoanalysis: a brief review of the historical emergence of brain theory, in Piers, C., Muller, J. P. & Brent, J. (eds), Self‐organising complexity in psychological systems. London: Jason Aronson.

Gardiner, L. J. N. (2022a) Attending Responding Becoming: A living~learning inquiry in a naturally inclusional playspace. PhD thesis, University of Hull.

Gardiner, L. J. N. (2022b) Chapter-Five-as-Appendix, adjunct to "Attending Responding Becoming: A living~learning inquiry in a naturally inclusional playspace". PhD thesis, University of Hull.

Gatehouse, G. (2022) The Coming Storm [Podcast]. 2 June 2022. Available online: https://www.bbc.co.uk/programmes/m001324r/episodes/player?page=2 [Accessed 02/09/2023].

Jordan, T. (2014) Deliberative methods for complex issues: A typology of functions that may need scaffolding. Group Facilitation, 13.

Jordan, T. (2020) Scaffolding Developmental Transformation Among Immigrants in Order to Facilitate Self‐Directed Integration: Practices and Theories of Change. Integral Review, 16(2), 5‐47.

Kahneman, D. (2011) Thinking, fast and slow. New York: Farrar, Straus and Giroux.

Maturana, H. R. & Varela, F. J. (1980) Autopoiesis and Cognition: The Realization of the Living, 42. Dordrecht; Boston; London: D. Reidel Publishing Company.

Meadows, D. H. & Wright, D. (2009) Thinking in systems: a primer. London: Earthscan.

Polanyi, M. (1966) The Tacit Dimension. New York, NY: Doubleday.

Raworth, K. (2012) A safe and just space for humanity: can we live within the doughnut. Oxfam Policy and Practice: Climate Change and Resilience, 8(1), 1‐26.

Raworth, K. (2017) Doughnut economics: seven ways to think like a 21st-century economist. Chelsea Green Publishing.

Sheets-Johnstone, M. (1981) Thinking in movement. The Journal of Aesthetics and Art Criticism, 39(4), 399-407.

Sheets‐Johnstone, M. (1999a) Emotion and movement. A beginning empirical‐phenomenological analysis of their relationship. Journal of Consciousness Studies, 6(11‐12), 259‐277.

Sheets‐Johnstone, M. (1999b) The primacy of movement. Amsterdam/Philadelphia: John Benjamins Pub.

Sheets‐Johnstone, M. (2009) The Corporeal Turn: An interdisciplinary reader. Exeter: Imprint Academic.

Sheets-Johnstone, M. (2018) If the Body Is Part of Our Discourse, Why Not Let It Speak? Five Critical Perspectives, in Depraz, N. & Steinbock, A. J. (eds), Surprise: An Emotion? Contributions to Phenomenology. Switzerland: Springer Nature, 83-95.

Zhang, B. H. & Ahmed, S. A. M. (2020) Systems Thinking—Ludwig Von Bertalanffy, Peter Senge, and Donella Meadows, in Akpan, B. & Kennedy, T. J. (eds), Science Education in Theory and Practice: An Introductory Guide to Learning Theory. Cham: Springer International Publishing, 419‐436.

[i] (Polanyi, 1966) – Polanyi refers to personal knowledge, suggesting ‘we can know more than we can tell’.

[ii] (Crawford, 2021) – a seminal book that was five years in the researching and writing.

[iii] ‘Enform’ means ‘to form or fashion’. Even though this term is somewhat archaic, I use it because it more accurately conveys the sense of something being shaped anew; whereas ‘to inform’ is more closely allied to ‘instruct, train’.

[iv] (Campolo & Crawford, 2020)

[v] (Raworth, 2012, 2017) – there is another deeply damaging assumption related to AI, namely that (perpetual) growth is possible and beneficial for all. Raworth develops ideas surfaced in the 1970s amongst systems thinkers, including Donella Meadows (Meadows & Wright, 2009; Zhang & Ahmed, 2020), that challenge these dominant assumptions driving economic policies and practices.

[vi] In contrast, the Wikimedia Foundation ensures that information posted on its online repositories is accurately referenced and moderated by multiple people over time, to ensure its reliability, accuracy, validity and trustworthiness.

[vii] (Gatehouse, 2022)

[viii] (Maturana & Varela, 1980) – the theory of autopoiesis offers an explanation of how living things are self-generating, in that they draw in what they need to sustain themselves. This is as true of biological necessity – the food, water and air we need literally to carry on existing – as it is of our capacity to learn and adapt to context.

[ix] (Freeman, 2007) – the paraphrasing from Freeman refers to autopoiesis applied to learning. We cannot learn if we do not metaphorically ingest, digest and integrate – i.e. think about – what we read/encounter.

[x] (Gardiner, 2022a, 2022b) – you will notice I spell Kosmological with a ‘k’, not a ‘c’. The original term Kosmos referred to more than what was assumed to be the material ‘bits’ in the universe. There was an acknowledgement of that which was not, and may never be, known, beyond material/tangible realms. It encompasses something of the ‘Divine’, whatever that means to each of us.

[xi] (Kahneman, 2011) – clarifies the difference between fast thinking (reactivity) and slow thinking, the latter of which enables new knowing, insight, solutions and creativity to emerge.

[xii] (Crawford, 2021) – Kate explains in depth how AI is neither ‘artificial’ nor ‘intelligent’.

[xiii] (Campolo & Crawford, 2020; Crawford, 2021) – the authors point out how marketeers have appropriated the emotive, misleading usage of ‘Artificial Intelligence’, whereas AI engineers tend to refer to ‘machine learning’, which is a more accurate description.

[xiv] (Gardiner, 2022a: p. xvii; p. 242) – Admit: “Sometimes I use this word for one of its meanings: acknowledge/confess; allow/let in, accept, accept as possible/valid. When I embolden the word, I am invoking all these meanings at once.”

[xv] Since undertaking my doctoral research.

[xvi] (Gardiner, 2022a: p. xvi; 2022b: Ch 5.5.12) – “Abduction is situated, naturally inclusional, emergent, nonlinear processing… when enhanced by a metalogically coherent, self-centering praxis such as Presence in Action… has the potential to generate radical insights, artefacts, and responses that are real and efficacious… and may reliably be transferable to others.” (2022a: p. xvi). Please refer to the Thesis and Chapter-Five-as-Appendix for an in-depth exploration of how I came to associate abduction with the self-centering praxis of Presence in Action.

[xvii] In my doctoral research – see endnote [xv] – this came to be known as the praxis of Presence in Action, which is scaffolded by a representation called the P6 Constellation, the Acuity Practice, and deep praxis behaviours enformed by the philosophical framing of Natural Inclusionality and a complexity-thinking paradigm.

[xviii] (Arendt, 2018) – in urging us to ‘think what we are doing’, she is calling for political, moral and ethical discourse, and for us not to allow science and its associated technologies to be elevated above all else in decision-making, as if offering absolute truths and certainties.

[xix] (Sheets-Johnstone, 1981, 1999a, 1999b, 2009, 2018) – I am alluding to ‘primal animation’.

[xx] ‘Admit’ has several meanings, all of which I invoke when I embolden the text – confess as true, accept, accept as valid, let in.

[xxi] I include faculties we know about, as well as those we may not realise we have… i.e. beyond current human awareness.

[xxii] (Andersson, 2015, 2018; Andersson, Ringnér & Inglis, 2017; Freed, 2009; Jordan, 2014, 2020) – scaffolding refers to the ways in which learning can be supported. The first uses of the term in educational contexts related to situations where knowledge or knowing was held by some; scaffolding supported others to gain what those few already knew. The scaffolding in the praxis of PIA comprises: two representations that support interior and exterior acuity (respectively, the P6 Constellation and the Symmathesic Agency Model); a supremely simple Acuity Practice, which consists of the recursive application of the question “What are you / am I noticing?”; and seven ‘deep praxis’ behaviours (called the Symmathesic Agency Behaviours, ‘SABs’) that distil into action the philosophical framing of PIA.

[xxiii] CIC – PIA Collective is a Community Interest Company. It is a membership body for PIA practitioners embracing PIA as a personal praxis to support themselves in navigating all the personal, relational and wider-world challenges that life brings them. It trades on the basis of a Sufficiency Principle (enough = enough), in which we ask clients to pay a little more than they might usually do, to support bursaries that enable practitioners to work with those who cannot afford to pay.

[xxiv] (Gardiner, 2022a: p. xxii; 2022b: Ch 5.5.5.2) – I offer a way to account for how collective agency can arise through the enhanced self-centering (PIA) capacities of individuals: “Symmathesic Agency is the meta-conscious capacity to engage in mutual, contextual learning through self-centering interaction in place in space in time.” I draw on Nora Bateson’s (2016: p. 169) neologism ‘symmathesy’: “an entity formed over time by contextual mutual learning through interaction.”
