The Minefield

ABC listen

Available Episodes

5 of 255
  • What are we doing when we let someone ‘save face’?
    Whether it is in geopolitics or in social and personal relationships, the overweening desire to “save face” can have manifestly unjust and outright damaging consequences.
    Those who continue to languish under Iran’s oppressive regime take little comfort in Ayatollah Ali Khamenei being afforded the opportunity to shore up his public standing following the US missile strikes on Iran’s nuclear facilities. And Hannah Arendt correctly observed at the heart of the ‘Pentagon Papers’ a willingness on the part of the US government to lie to the American people about the status of the war in Vietnam, and thus to prolong an unwinnable and inhumane war, in order to protect “the reputation of the United States and its President”.
    When saving face takes precedence over all other considerations, others invariably pay the price in order for the untrammelled supremacy of the ego to persist.
    But “ego” does not quite grasp the social complexity bound up with the concept of “face” — which suggests something closer to “honour”, or a kind of thick social reputation, standing or prestige that is conferred by others, the loss of which is no mere bruised ego but a threat to one’s social existence.
    While this concept of “face” has partly been appropriated from Chinese culture, it nonetheless has roots in the ancient honour/shame cultures of the Mediterranean and Asia Minor, and, as Kwame Anthony Appiah points out, finds expression fully as much in Western Europe and West Africa as it does in East Asia.
    Thus Immanuel Kant will warn about the moral dangers of “defamation” and of the intentional dissemination of scandalous information which, even if true, “detracts from another’s honour” and “diminishes respect for humanity as such … making misanthropy or contempt the prevalent cast of mind”.
    He concludes: “It is, therefore, a duty of virtue not to take malicious pleasure in exposing the faults of others so that one will be thought of as good as, or at least not worse than, others, but rather throw the veil of philanthropy [Menschenliebe] over their faults, not merely by softening our judgements but also by keeping our judgements to ourselves; for examples of respect that we give others can arouse their striving to deserve it.”
    Kant recognises that frequently the desire to humiliate another is not about their reproof, but about our own relative aggrandisement.
    Does this suggest that giving someone the ability to “save face”, even when they are found to be in the wrong, can function both as a rejection of the zero-sum logic that often prevails in honour/shame cultures (in which there is only so much social prestige to go around) and as a constructive way of keeping them within a moral community?
    --------  
    54:42
  • The threat that AI poses to human life — with Karen Hao
    There is something undeniably disorienting about the way AI features in public and political discussions.
    On some days, it is portrayed in utopian, almost messianic terms — as the essential technological innovation that will at once turbo-charge productivity and discover the cure for cancer, that will solve climate change and place the vast stores of human knowledge at the fingertips of every human being. Such are the future benefits that every dollar spent, every resource used, will have been worth it. From this vantage, artificial general intelligence (AGI) is the end, the ‘telos’, the ultimate goal, of humanity’s millennia-long relationship with technology. We will have invented our own saviour.
    On other days, AI is described as representing a different kind of “end” — an existential threat to human life, a technological creation that, like Frankenstein’s monster, will inevitably lay waste to its creator. The fear is straightforward enough: should humanity invent an entity whose capabilities surpass our own and whose modes of “reasoning” are unconstrained by moral norms or sentiments — call it “superintelligence” — what assurances would we have that that entity would continue to subordinate its own goals to humankind’s benefit? After all, do we know what it will “want”, or whether the existence of human beings would finally pose an impediment to its pursuits?
    Ever since powerful generative AI tools were made available to the public not even three years ago, chatbots have displayed troubling and hard-to-predict tendencies. They have deceived and manipulated human users, hallucinated information, spread disinformation and engaged in a range of decidedly misanthropic “behaviours”. Given the unpredictability of these more modest algorithms — which do not even approximate the much-vaunted capabilities of AGI — who’s to say how a superintelligence might behave?
    It’s hardly surprising, then, that the chorus of doomsayers has grown increasingly insistent over the last six months. In April, a group of AI researchers released a hypothetical scenario (called “AI 2027”) which anticipates a geopolitical “arms race” in pursuit of AGI and the emergence of a powerful AI agent that operates largely outside of human control by the end of 2027. In the same vein, later this month two pioneering researchers in the field of AI — Eliezer Yudkowsky and Nate Soares — are releasing their book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
    For all this, there is a disconcerting irony that shouldn’t be overlooked. Warnings about the existential risk posed by AI have accompanied every stage of its development — and those warnings have been articulated by the leaders in the field of AI research themselves.
    This suggests that warnings of an extinction event due to the advent of AGI are, perversely, being used both to spruik the godlike potential of these companies’ products and to justify the need for gargantuan amounts of money and resources to ensure “we” get there before “our enemies” do. Which is to say, existential risk is serving to underwrite a cult of AI inevitabilism, thus legitimating the heedless pursuit of AGI itself.
    Could we say, perhaps, that the very prospect of some extinction event, of some future where humanity is subservient to superintelligent overlords, is acting as a kind of decoy, a distraction from the very real ways that human beings, communities and the natural world are being exploited in the service of the goal of being the first to create artificial general intelligence?
    Guest: Karen Hao is the author of Empire of AI: Inside the Reckless Race for Total Domination.
    --------  
    54:36
  • Are there inherent limits on what should be said in public debate?
    In the middle of August, the Bendigo Writers Festival found itself at the centre of a firestorm after over fifty participants decided to withdraw — some claiming they were being required to engage in a form of “self-censorship”, and others withdrawing in solidarity.
    Reports have it that, two days before the festival was due to open, a “code of conduct” was sent to those taking part in one of the four La Trobe Presents panels, “urging compliance with the principles espoused in [the university]’s Anti-Racism Plan, including the definitions of antisemitism and Islamophobia in the Plan”. The code also asked participants to practise “respectful engagement” and “[a]void language or topics that could be considered inflammatory, divisive, or disrespectful”.
    For many of those due to take part in the writers’ festival, this code of conduct amounted to a demand for self-censorship over what they hold to be a “genocide” taking place in Gaza, and would prevent them from criticising the actions of the State of Israel, “Zionism” as an ideology and, by extension, “Zionists”.
    This is just the latest in a series of controversies surrounding Australian writers’ festivals — some of which pre-date the massacre of Israeli civilians on 7 October 2023 and the onset of Israel’s devastating military incursion into Gaza, but which have now been intensified and rendered even more intractable by those events.
    The conflict in Gaza has placed severe strain not only on the relationships between Australian citizens and communities, but also on our civic spaces and modes of communication: from protests on streets and demonstrations on university campuses, to social media posts and opinion pieces. Given that writers’ festivals intersect with each of these social spheres, it is unsurprising that they should prove so susceptible to the fault lines that run through multicultural democracies.
    Leaving aside the wisdom or effectiveness of “codes of conduct”, it is worth considering whether there are constraints inherent to public debate in a democracy — which is to say, forms of self-limitation and fundamental commitments that ensure the cacophony of conflicting opinions does not descend into a zero-sum contest.
    --------  
    54:36
  • If AI causes widespread job losses, is a Universal Basic Income the solution?
    This week the federal government’s much-anticipated, and just as hyped, Economic Reform Roundtable got underway. Central to the agenda is how to boost national productivity — which is, roughly speaking, a measure of how much output can be produced from a given quantity of inputs, and hence of how affordable goods and services are relative to the resources used to make them.
    Put simply: greater efficiency leads to greater affordability and higher living standards. When the same amount of time, labour, investment and raw materials (‘inputs’) can produce an even greater number of goods and services (‘outputs’), the inputs become more valuable even as the outputs become more affordable, leading to lower working hours and relatively higher standards of living.
    By contrast, anything that impedes efficiency reduces productivity. Unsurprisingly, then, the need to reduce regulation emerged as a central theme in the lead-up to the productivity roundtable — whether that means reforming environmental laws that slow down the housing approval process or reducing constraints on the development and deployment of artificial intelligence.
    However you cut it, AI is central to our current national conversation about productivity, efficiency and standards of living. And yet, even as AI represents a key to “unlocking productivity”, it also presents an imminent threat to employment itself. Modelling by Goldman Sachs found that, while AI could drive a 7 per cent boost in global GDP by 2030, this would likely come at the expense of 300 million full-time jobs worldwide.
    In other words, AI is the latest, and most severe, expression of what John Maynard Keynes termed, a century ago in “Economic Possibilities for our Grandchildren” (1930), “technological unemployment” — by which he meant “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour”.
    AI is a technology that will produce entire areas of economic activity where human labour is either wholly redundant or greatly reduced, leading to a paradoxical situation in which the economy is thriving and unemployment is high.
    It’s perhaps not surprising that the possibility of a Universal Basic Income (UBI) is being mooted — including by the pioneers, purveyors and prophets of AI themselves — as a necessary remedy to the radical disruption of humanity’s relationship to work that is likely to transpire between now and 2030.
    What are the merits of such a proposal? Could it function as a radical alternative to our current system of conditional welfare, relying as that system does on the moralisation of work itself?
    --------  
    1:03:59
  • Should childcare be offered by for-profit providers?
    In March, an ABC Four Corners investigation detailed widespread instances of abuse, injury and neglect in childcare centres across the country. Just a few months later, in a climate of already heightened public awareness and media scrutiny, a series of deeply disturbing allegations of child sex abuse in childcare centres in Victoria, New South Wales and Queensland came to light.
    The nature and extent of these instances of neglect and abuse, as well as the fact that they involved the most vulnerable among us, suggested a systemic problem in Australia’s $20 billion childcare sector — something that tougher regulation, a national register of childcare workers, improved child safety training or even CCTV cameras will not fully address.
    Put simply: the concern isn’t simply that a few ‘bad actors’ managed to slip through the regulatory cracks, but that something more thoroughgoing or pervasive is undermining the quality of the care and education being provided to young children.
    Interestingly, both the Education Minister, Jason Clare, and the Minister for Early Childhood Education, Jess Walsh, have implicated the profit motive itself as compromising the care offered by some providers. Walsh singled out “some repeat offenders who continue to put profit ahead of child safety”, and Clare has acknowledged that “overwhelmingly higher levels” of quality are found among the not-for-profit providers.
    The federal government has announced a series of measures that, it hopes, will restore the trust of parents and the public in Australia’s childcare system — two-thirds of which is comprised of for-profit companies that have benefitted greatly from the subsidies provided to parents by the government. One of these measures is the ability to strip unsafe early education and care providers of their eligibility for subsidised care.
    But it is government subsidies themselves that have fuelled demand in the first place, precipitating a rapid influx of stock-market-listed companies hoping to reap their own share of the windfall. It’s a familiar story that has played out since the late 1970s: rather than running vital utilities or social services itself, government delegates the provision of vital goods and services to “the market”, into which it then intervenes through funding or regulation. Michael Moran has termed this the advent of the “regulatory state”.
    But are there some social goods — which is to say, goods that are integral to the possibility of human flourishing — that should not be exposed to the perverse incentives of the market? As Andrew Hudson, CEO of the Centre for Policy Development, has pointed out: “For too long, early childhood education has operated as a private market — leaving governments with limited tools to manage quality, access, or safety across the system. That’s what needs to change.”
    Unless an overriding commitment to the wellbeing and flourishing of the children operates as the animating principle or ‘telos’ of the organisation itself, what reason is there not to cut corners, to limit staff pay, to reduce overheads, to maximise efficiency, to do the bare minimum required for compliance?
    When the wellbeing of children is made subordinate to the goal of profit, it is the children themselves who are worse off.
    You can read Luara Ferracioli and Stephanie Collins reflecting on whether early childhood care and education are compatible with the profit motive on ABC Religion & Ethics.
    --------  
    58:00

More Society & Culture podcasts

About The Minefield

In a world marked by wicked social problems, The Minefield helps you negotiate the ethical dilemmas, contradictory claims and unacknowledged complicities of modern life.