
In January this year, the Australian Government issued its response to feedback on the Safe and Responsible AI in Australia discussion paper. In that response, the Government committed to “consulting further on options for introducing new regulatory guardrails …”. Both industry and community have called for regulatory frameworks that adequately address the perceived risks of AI. The Government signalled that, while it would introduce “guardrails” to deal with “high-risk” AI activities, it was also keen to see AI activities in low-risk environments continue to develop unimpeded.
So where does this leave the world of engagement? Since very few AI activities qualify as “low risk” under the Government’s definition, we can expect regulatory interventions to be introduced over the coming months and years.
Will these interventions be sufficient to address the growing lack of trust between communities, governments and organisations? How can we make sure the Government doesn’t accidentally throw the baby out with the bathwater when it comes to AI and technology that could genuinely improve and enhance communication? And what do we do in the meantime?
We will continue to lead the conversation with cross-discipline practitioners who work at the intersection of technology and engagement. We have had fascinating discussions with a broad range of specialists working in the built environment, from architects and developers to politicians, government executives and community leaders, exploring the future of AI – what opportunities and challenges await us, and what should we do now to make the most of the opportunities and avoid the pitfalls?
Some of the questions we have posed include how AI will: