Artificial intelligence and public policy


Overviews and definitions

Last month the Commonwealth released a consultation paper, Safe and Responsible AI in Australia. It provides a definition of AI:

Artificial intelligence (AI) refers to an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation.

It goes on to state (in language that could do with an AI edit):

Machine learning are the patterns derived from training data using machine learning algorithms, which can be applied to new data for prediction or decision-making purposes.

Generative AI models generate novel content such as text, images, audio and code in response to prompts.

Key aspects of those descriptions are the absence of explicit programming, and the way machines learn from training data. These descriptions distinguish AI from complicated models built from thousands of lines of code written by programmers to perform tasks such as optimizing air traffic control or controlling a factory’s production. Such models give consistent, replicable results (even though they are sometimes consistently wrong, such as the model driving Robodebt).
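To make that distinction concrete, here is a minimal sketch in Python. The loan-approval rule and the training data are invented purely for illustration, and the learned model assumes the scikit-learn library; neither comes from the consultation paper.

from sklearn.tree import DecisionTreeClassifier  # assumes scikit-learn is installed

# Explicit programming: a human writes the decision rule line by line.
def approve_by_rule(income, debt):
    return income > 50_000 and debt / income < 0.4

# Machine learning: the rule is derived from training examples instead.
# The data below are invented for illustration.
X_train = [[30_000, 20_000], [80_000, 10_000], [60_000, 40_000], [90_000, 5_000]]
y_train = [0, 1, 0, 1]  # past decisions the model learns to imitate

model = DecisionTreeClassifier().fit(X_train, y_train)

print(approve_by_rule(70_000, 10_000))       # rule-based: same inputs always give the same, auditable answer
print(model.predict([[70_000, 10_000]])[0])  # learned: the answer depends on patterns in the training data

The first function behaves like the air traffic or factory models mentioned above – consistent and auditable, even when consistently wrong – while the second gives whatever answer its training data lead it to.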

The Commonwealth’s paper covers the AI field broadly, and concludes with 20 consultation questions, inviting people to submit their responses to the Department of Industry, Science and Resources.

Martyn Goddard’s Policy Post provides a general introduction to AI in the context of its effect on productivity: Artificial intelligence will upend our economy. We’re utterly unprepared. He describes the difference between “narrow AI” – the AI that helps with trip planning, medical diagnosis and recommender systems – and “general AI” – the AI that is about solving unfamiliar tasks based on machines replicating human cognitive abilities.

Goddard’s contribution is mainly about how AI will displace labour in activities that have so far been comparatively untouched by 200 years of automation. The effect on labour productivity will be massive in almost all industries. The only industries to be comparatively lightly affected are hotels, restaurants, education, the arts and entertainment, where a living human presence plays a strong role. He points out that with so much productivity unleashed, and therefore so much labour displaced, there should be means of distributing income that are less dependent on work, such as a universal basic income.

Stuart Russell of the University of California, Berkeley has a 6-minute TED talk: How will AI change the world. He explains the difference between asking a human to do something and specifying that task as an objective for an AI system. When a human is given a task he or she makes a reasonable set of assumptions about it – mainly subconsciously – and acts accordingly. Those assumptions have to be set as explicit constraints and boundary conditions in the instructions given to an AI system. To illustrate the scope of the problem Russell describes the complexity of asking an AI system to go and get a cup of coffee.
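Russell’s point can be sketched in a few lines of Python. The candidate plans, their scores and the constraints below are all invented for illustration; the sketch simply shows how assumptions a person would make without thinking have to be written out explicitly for a machine.

# Toy illustration: a bare objective ("get coffee quickly") versus the same
# objective with the implicit human assumptions written out as constraints.
candidate_plans = [
    {"name": "buy from the cafe",      "speed": 5, "cost": 6,  "harms_someone": False},
    {"name": "queue-jump at the cafe", "speed": 9, "cost": 6,  "harms_someone": True},
    {"name": "drive across town",      "speed": 3, "cost": 40, "harms_someone": False},
]

def best_plan(plans, objective, constraints=()):
    # Keep only the plans that satisfy every constraint, then maximise the objective.
    allowed = [p for p in plans if all(check(p) for check in constraints)]
    return max(allowed, key=objective)

# Objective alone: the system happily picks the anti-social plan.
print(best_plan(candidate_plans, objective=lambda p: p["speed"])["name"])

# Objective plus the assumptions a human would make without being asked.
print(best_plan(
    candidate_plans,
    objective=lambda p: p["speed"],
    constraints=(lambda p: not p["harms_someone"], lambda p: p["cost"] <= 10),
)["name"])

With the bare objective the system picks the fastest plan regardless of its side effects; only when the unstated human assumptions are spelled out as constraints does it behave the way the person asking for coffee intended.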


AI answers its own questions – a chat with ChatGPT

Goddard’s post mentions agriculture as one of the industries most subject to job loss through the application of AI.

Curious about this finding, I asked ChatGPT: “How will AI affect agriculture in Australia?” Its response gives a list of seven ways that will happen. They are all general enough to be incontestable. They are devoid of numbers and they could apply to farming of any type in any country.

So I tried getting more specific, and asked “How will AI affect beef production in Australia's arid zone?”. Its response is still fairly general, but at least it wrote “Climate Resilience: Australia's arid zone is susceptible to climate change and extreme weather events”.

But there was still not much on the people who earn a living in the arid zone, who could lose their livelihood. So another try: “How will climate change affect people in the cattle industry in Australia's arid zone?” Little improvement. Although people are specifically mentioned in the question they are overlooked in the response.

If a student were set that third question he or she would go to many sources and pull the material together, making logical connections between what those sources reveal. If AI increases labour productivity, what will happen to displaced workers? Can they acquire the skills to use these new technologies? What will be the effect of heat stress on humans? How will those with long-term connections to the arid zone, particularly Indigenous Australians, be affected? These questions would all be covered in a student’s assignment, or for that matter a journalist’s story.

That is not to dismiss ChatGPT. It does well in identifying issues the student or journalist might have missed, such as the possible need to change the genetic lines of cattle in response to climate change. In fact, for the writer seeking to expand knowledge of a subject, platforms such as ChatGPT can serve to state the mainstream ideas, from which he or she can develop a critical analysis or expand into new areas.

But in terms of answering a basic university assignment or providing a story for a website it has a long, long way to go. Diligent students and journalists who have something useful to say needn’t feel threatened by AI for some time yet.

Confirming that view, Michael Timothy Bennett of ANU writes in The Conversation: No, AI probably won’t kill us all – and there’s more to this fear campaign than meets the eye. Bennett argues that AI:

…isn’t even close to the sort of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it is not “intelligent”.

That criticism seems to be about systems such as ChatGPT, which are in the business of pulling together published material. Writing in the Lowy Institute’s The Interpreter – How can we regulate AI? Let’s just ask it – Lydia Khalil tests ChatGPT by asking it “Can you offer policy recommendations to governments on how it can best regulate generative AI to help protect humanity and preserve democracy?”. It gives a neat list of 10 principles of regulation, such as “addressing bias and fairness”, but they are all in general terms. Like the Russian Constitution it’s a fine statement, but somewhat detached from reality.

People such as Stuart Russell, however, are writing about systems that have far more autonomy and that are commissioned to do much more complex tasks than ChatGPT, and that have more significant consequences.


Risks of AI

A number of AI specialists have signed an open letter asking scientists and governments to pause giant AI experiments. They write:

Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

They are no Luddites: they recognize that “humanity can enjoy a flourishing future with AI”, but they urge caution.

On the ABC’s 7.30 Report, Stuart Russell warns that mitigating the risk of extinction from AI should be a global priority. (8 minutes)

That sounds more like a slogan from an Extinction Rebellion protester than the considered view of a Berkeley professor, but his explanation is logical and credible. We need to think carefully about what humans need and want before we delegate that task to machines. Because AI is oblivious to the assumptions we take for granted about good public policy, we could unwittingly ask it to guide us to our own destruction. Those risks are amplified when people with malice towards groups or individuals write the specifications.

An example of specific malicious intent is provided by Archon Fung and Lawrence Lessig of Harvard University in a Conversation contribution: How AI could take over elections – and undermine democracy. They describe how AI systems could work on individuals and groups to slowly bring them around to supporting particular parties or policies through “reinforcement learning”. Reinforcement learning is already used by the advertising industry to help us learn that we need stuff we initially disregard. Supercharged with the human-like skills of AI, reinforcement learning can be even more effective.
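For readers who want to see the mechanism, here is a toy sketch in Python of that trial-and-error loop – a simple “bandit” algorithm with an invented, simulated user and made-up persuasion rates. It is not Fung and Lessig’s model, just an illustration of how a system can learn which message shifts behaviour and then keep serving it.

import random

messages = ["neutral news item", "emotive claim", "targeted wedge issue"]
persuasion_rate = {"neutral news item": 0.1,      # invented probabilities that a nudge "works"
                   "emotive claim": 0.3,
                   "targeted wedge issue": 0.5}

counts = {m: 0 for m in messages}
rewards = {m: 0.0 for m in messages}

def simulated_response(message):
    # Stand-in for a real person: returns 1.0 if the nudge succeeded.
    return 1.0 if random.random() < persuasion_rate[message] else 0.0

def average_reward(m):
    return rewards[m] / counts[m] if counts[m] else 0.0

for _ in range(1000):
    if random.random() < 0.1:              # occasionally try a different message (explore)
        choice = random.choice(messages)
    else:                                  # otherwise serve the best message found so far (exploit)
        choice = max(messages, key=average_reward)
    counts[choice] += 1
    rewards[choice] += simulated_response(choice)

print(max(messages, key=average_reward))   # the system settles on the most persuasive message

The system needs no understanding of politics or persuasion: it simply notices which message gets the response it is rewarded for, and serves more of it.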

On a completely different level of risk – one that can genuinely be classified as an existential threat to humanity – is the possibility that AI systems will become cleverer than us and take over. You can watch a podcast hosted by the University of Toronto in which Geoffrey Hinton – known as the “godfather of AI” – describes the emergence of AI and its possible development if it is left unchecked.

Hinton clearly distinguishes AI from human intelligence: it has a much better learning algorithm and is much better at spreading its learning. And it’s very different from the controlled logic of traditional computing. Most of his 46-minute session – The Godfather in Conversation: Why Geoffrey Hinton is worried about the future of AI – is about how AI has advanced so quickly, well beyond his and other experts’ predictions. That pace worries him and it’s what led him to leave Google.

He doesn’t want to see the development of AI arrested. It has great potential for human welfare, not only in doing what we have always done more efficiently (improving productivity), but also in doing new things in health care (diagnosis, pharmaceutical development), development of materials, and weather forecasting, to name three areas. But we need to recognize that AI is on a path towards becoming smarter than us, and through global cooperation we must prevent it from taking control.

You can read a similar message in a post on the ABC website – Tech world warns risk of extinction from AI should be a global priority like pandemics and nuclear war – where Michael Vincent describes the work and concerns of the San Francisco-based Center for AI Safety.


Regulating AI – how the Europeans do it

The EU is taking firm steps toward regulating AI. In a 9-minute session on ABC Breakfast you can hear Brando Benifei of the European Parliament describe the EU’s Artificial Intelligence Act: EU takes major step towards regulating AI. The Act’s purpose is to mitigate risks arising from the use of AI and to guard against violation of people’s rights. He specifically mentions the need to guard against AI reinforcing human prejudices through practices such as emotion recognition (which can be used to manipulate people in schools and workplaces), predictive policing, creation of facial recognition databases, and biometric surveillance. Where AI is used, there should be full disclosure, and there should be protection of the intellectual property rights of those who generate the material from which AI learns.


Australians’ attitude to AI

The Essential Report of June 13 surveyed respondents on their attitudes to AI.

Most people (55 percent) believe there should be a ban on high-risk AI. Older people are particularly concerned about AI risks.

Only 12 percent of respondents believe that “the government should leave it up to the market to ensure that AI is developed ethically”. That doesn’t mean everyone else wants specific regulation of AI: only 48 percent respond that “the government should create new laws to further regulate the development of AI”, while the rest believe that existing laws on privacy should provide adequate protection. Younger people are even more relaxed about AI: only 33 percent of respondents aged 18-34 believe there should be new laws.

Essential also asked which AI applications would have a positive or negative impact. About 60 percent of respondents believe that AI will be positive for “medical development”, and there is some acceptance of facial recognition technology and automated work processes. Driverless cars get only 28 percent support, and respondents are even less enthusiastic about applications such as ChatGPT and the creation of virtual personalities.

It is apparent from these responses that there is a gulf between what experts think about AI and its ramifications, and the way the general public see it.