‘AI can make mistakes’ – Forzan warns small business leaders
By Leon Gettler, Talking Business
ARTIFICIAL INTELLIGENCE (AI) is now the big challenge for businesses everywhere. In the last three years, businesses have moved from experimenting with AI to actively deploying it at scale, especially generative AI like ChatGPT.
Adoption has surged, but many companies still struggle to capture full value because they’re still learning.
Matthew Forzan, the founder of Yoghurt Digital, said one of the key problems was that AI can make mistakes – a problem compounded by the fact that it is getting smarter all the time.
Mr Forzan has nearly two decades of industry experience. He is a seasoned digital marketing professional with deep expertise in search engine optimisation (SEO), paid search and social marketing, and user experience.
“It’s good for quick form information – if you want to know what the capital of Australia is or the best place to travel in Melbourne for a coffee,” Mr Forzan told Talking Business. “All those types of things are really good but we’ve seen it have critical errors like how to make cornflakes or boil an egg. Things like that it’s still getting wrong.
“But this is the worst that it’s ever going to be,” he said.
“The problem is that it’s always outpacing things like legislation.”
Identify who is in the driver’s seat of AI companies
Mr Forzan said this meant it was important to look at who was in the “driver’s seat” of the companies behind AI.
“They obviously have a commercial interest there. We’re talking huge valuations, huge salaries of the people they’re recruiting in Silicon Valley,” he said.
“I dare say their interests are potentially less for the human race than for themselves, which opens up some problems too.”
Mr Forzan said different AI platforms were used for different purposes.
“If you look at AI Mode, in the context it’s being used, it’s typically more about finding access to information like websites and things,” he said.
“Whereas (with) something like ChatGPT, a lot of people are using it as a therapist or a companion.
“I think that’s where it can start to go off the rails because there’s no layer or lens of protection or validation. I think a lot of children might be picking that up, acting on it, taking it as concrete advice.”
AI has been involved in self-harm cases
Children’s use of ChatGPT and other AI large language models has unfortunately led to instances of self-harm around the world.
“With children having access to these things, there is a risk they can be influenced in unforeseen ways,” Mr Forzan said.
“And because something is unregulated, left to their own devices, they can be getting incorrect advice and I think there’s an inherent risk there when it’s left unmonitored.”
He said the social media ban for under-16s was a good step forward.
However, he said education in schools had an important role to play as well. There was also scope to better educate parents on teaching their kids how to use AI.
“It’s a fantastic tool when used correctly. It’s a potentially dangerous tool when not used correctly,” Mr Forzan said.
“I don’t think the answer is to not use it at all. I think it should be used at a point of maturity of the individual, and also with the support of their parents and the schooling.”
Hear the complete interview and catch up with other topical business news on Leon Gettler’s Talking Business podcast, released every Friday at www.acast.com/talkingbusiness
https://shows.acast.com/talkingbusiness/episodes/talking-business-42-interview-with-matthew-forzan-from-yoghu
ends