AI Has a Stunning Ability to Supply Information — But Can It Be Harnessed for Harm by Bad Actors?
- Written by Times Media

Artificial intelligence has become one of the most extraordinary technological leaps of the 21st century. In just a few years, generative AI systems have gone from experimental curiosities to powerful tools capable of producing human-level writing, analysing vast volumes of data, automating entire workflows, and supporting complex decision-making across medicine, logistics, science and public administration.
For most Australians, this progress feels overwhelmingly positive. AI can help diagnose disease, reduce road fatalities, improve customer service, streamline government services, and boost productivity in a nation grappling with labour shortages. It can save time, reduce costs, and simplify processes that once required teams of experts.
But alongside this remarkable potential sits a more unsettling reality: the same tools that empower communities, small businesses and scientists can also be exploited by those with harmful intent. As AI’s capabilities grow, so does the conversation about whether bad actors — from criminals to foreign adversaries — could harness these systems in ways that threaten safety, stability and trust.
This article explores that duality: the stunning capability of AI to supply information and insight, and the escalating concern about how easily it could be weaponised.
AI’s Power Lies in Its Ability to Scale Human Knowledge
At its core, AI is a multiplier of human capability. What once required hours of research can now be produced in seconds. What once demanded specialised training — data modelling, coding, statistical analysis, translation — can now be executed by anyone with a smartphone.
This accessibility is AI’s greatest achievement, but also its greatest vulnerability.
AI models can:
- Analyse massive datasets to reveal patterns invisible to humans
- Generate detailed reports, code, essays, summaries and translations instantly
- Provide step-by-step guidance on topics once restricted to experts
- Automate communications, interactions and decision flows at unprecedented scale
For legitimate users, these features are transformative.
For bad actors, they can be dangerously enabling.
Could AI Be Used to Spread Manipulation and Disinformation? Absolutely.
One of the most widely recognised risks is AI-driven disinformation. Generative AI can create:
- Convincing fake news articles
- Realistic but fabricated audio and video
- Chatbots posing as real individuals
- Targeted political messaging
- Manufactured social-media movements
Australia has already seen glimpses of online campaigns driven by coordinated inauthentic activity. The difference now is scale.
What once required teams of human operators can be managed by a handful of bad actors using automated agents capable of generating thousands of personalised posts per hour.
Deepfakes present an even more alarming frontier: false videos of politicians, business leaders or celebrities could influence elections, markets or public responses to emergencies.
Democracies everywhere, including Australia, must prepare now, while this remains a concern rather than a crisis.
Cybercrime: AI as a Force Multiplier for Hackers
Cybercriminals thrive on automation, and AI is supercharging their capabilities.
AI can help criminals:
- Write sophisticated phishing emails free of spelling or grammar errors
- Mimic the writing style of colleagues or executives to increase scam success
- Generate malware code or identify software vulnerabilities
- Test thousands of attack vectors at machine speed
- Improve the social engineering scripts that trick victims into payment or access
Australians are increasingly vulnerable as scams grow more personalised, and many cybersecurity experts warn that AI-enhanced fraud could be virtually indistinguishable from legitimate communication.
The risk is not hypothetical: it is active and evolving.
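To see why the old advice to "look for spelling mistakes" no longer works, it helps to look at the heuristics legacy spam filters relied on. The minimal sketch below (Python, standard library only; the phrase lists, example domains and function names are illustrative inventions, not any real product's) scores an email against classic scam tells. Once AI removes the language errors, only the technical signals still fire.

```python
import re

# Classic red flags that legacy spam filters leaned on. AI-written scams
# eliminate the first category entirely, which is why those checks now fail.
URGENCY_PHRASES = ["act now", "verify your account", "payment overdue", "suspended"]
COMMON_MISSPELLINGS = ["recieve", "acount", "pasword", "offical"]

def scam_signals(subject, body, from_domain, reply_to_domain):
    """Return the heuristic warning signs present in an email."""
    text = f"{subject} {body}".lower()
    signals = []
    if any(word in text for word in COMMON_MISSPELLINGS):
        signals.append("spelling errors")             # rarely fires on AI output
    if any(phrase in text for phrase in URGENCY_PHRASES):
        signals.append("urgency language")
    if from_domain.lower() != reply_to_domain.lower():
        signals.append("mismatched reply-to domain")  # survives AI polish
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        signals.append("raw-IP link")                 # survives AI polish
    return signals

# A fluent, AI-polished invoice scam trips only the technical checks:
print(scam_signals(
    "Invoice approval required",
    "Hi, please approve the attached invoice today: http://203.0.113.7/pay",
    "example.com.au",
    "example-billing.net",
))  # -> ['mismatched reply-to domain', 'raw-IP link']
```

The language-based checks return nothing on the example, which is precisely the gap security researchers are warning about: defences must shift from spotting bad writing to verifying technical provenance.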
AI in the Hands of Extremists or Hostile States
Beyond criminal activity, there are national-security risks.
Bad actors could potentially use AI systems to:
- Analyse critical infrastructure vulnerabilities
- Automate reconnaissance on government networks
- Assist in planning attacks
- Generate propaganda tailored for radicalisation
- Accelerate research into biological, chemical or cyber-weapons
While responsible AI companies implement safeguards to restrict dangerous outputs, no system is perfect — and open-source models can be modified to bypass guardrails entirely.
Australia’s intelligence community has already warned that foreign adversaries view AI as a strategic asset. If left unchecked, the technology could shift the balance of power in ways that undermine democratic institutions and national stability.
Economic Harm: Market Manipulation and Financial Abuse
Bad actors don’t need weapons to cause widespread harm. They can use AI to disrupt economic systems.
Potential misuse includes:
- AI-generated financial scams
- Automated pump-and-dump schemes
- Fake analyst reports influencing share prices
- AI bots manipulating crypto markets
- Fabricated legal or regulatory documents
Markets rely heavily on trust and accurate signals. AI-driven manipulation could corrupt both within seconds.
AI Can Also Be Used for Harassment, Identity Theft and Personal Harm
On a micro level, individuals may be targeted through:
- AI-generated revenge porn or sexually explicit deepfakes
- Identity replication for fraud
- Personalised harassment campaigns
- Stalking aided by predictive data tools
- Automated creation of defamatory content
These threats affect not just public figures, but everyday Australians — especially young people.
Where Do Solutions Come From? Regulation, Collaboration and Technology
Preventing AI from being used maliciously requires a multi-layered approach.
1. Strong, enforceable regulation
Governments must implement frameworks that:
- Define acceptable use
- Require transparency for high-risk systems
- Mandate safety audits
- Penalise misuse
- Support law-enforcement capability
The EU, US and UK have taken first steps; Australia is developing its own framework but must act quickly.
2. Industry-wide safety standards
Tech companies need shared guardrails so that safety does not become a competitive disadvantage. This includes:
- Red-team testing
- Misuse detection
- Responsible data sourcing
- Mandatory watermarking of AI-generated content (a toy sketch of how detection might work follows this list)
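How might watermark detection work in practice? The toy sketch below illustrates the general idea behind statistical "green-list" text watermarks: a hidden rule marks some words as preferred at each step, and a detector checks whether a suspicious text favours them far more than chance would. It is a simplified illustration, not any vendor's actual scheme; the hashing rule and threshold are invented for the example.

```python
import hashlib

GREEN_SHARE = 0.5  # share of the vocabulary marked "green" at each step

def is_green(prev_word, word):
    """Toy rule: hash each adjacent word pair; watermarked generators are
    biased to pick words for which this test comes out true."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 256 * GREEN_SHARE

def green_fraction(text):
    """Fraction of adjacent word pairs that land in the 'green' set."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# Ordinary human text hovers near 0.5 by chance; text from a watermarked
# generator that consistently prefers green words scores far higher, which
# a detector can flag with a simple statistical test.
```

Because the signal is statistical, detection needs a reasonable amount of text, and heavy paraphrasing can weaken it, which is why watermarking is one guardrail among several rather than a complete answer.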
3. Public education
Australians must become AI-literate, much like they became cyber-literate. Understanding risks reduces vulnerability.
4. International cooperation
Threats do not respect borders; solutions cannot either.
The Duality of AI: Stunning Capability, Serious Vulnerability
AI is neither inherently good nor inherently evil. It is a powerful tool — one of the most powerful humanity has ever created — and its impact depends on who controls it, how it is used, and how society adapts.
Used responsibly, AI can elevate productivity, empower small businesses, advance science, support democracy and enrich everyday life.
In the wrong hands, it can distort reality, undermine trust, accelerate crime, and destabilise institutions.
The challenge now is not to slow AI’s progress, but to guide its trajectory. Australia — like every nation — must ensure that innovation does not outrun safety, that openness does not invite exploitation, and that this extraordinary technology remains a tool for empowerment, not a weapon for harm.
As we stand at the edge of a new technological era, one truth is clear: the future of AI will be determined not by the machines, but by the people who wield them.