Google has built a system called Duplex that can phone a local restaurant, make reservations, and fool the person on the other end of the line into thinking the caller is a real person.
During the summer before the 2016 presidential election, John Seymour and Philip Tully, two researchers with ZeroFOX, a security company in Baltimore, unveiled a new kind of Twitter bot. By analyzing patterns of activity on the social network, the bot learned to fool users into clicking on links in tweets that led to potentially hazardous sites.
The bot, called SNAP_R, was an automated “phishing” system, capable of homing in on the whims of specific individuals and coaxing them toward that moment when they would inadvertently download spyware onto their machines. “Archaeologists believe they’ve found the tomb of Alexander the Great is in the US for the first time: goo.gl/KjdQYT,” the bot tweeted at one unsuspecting user.
Even with the odd grammatical misstep, SNAP_R succeeded in eliciting a click as often as 66 percent of the time, on par with human hackers who craft phishing messages by hand.
The bot was unarmed, merely a proof of concept. But in the wake of the election and the wave of concern over political hacking, fake news and the dark side of social networking, it illustrated why the landscape of fakery will only darken further.
The two researchers built what is called a neural network, a complex mathematical system that can learn tasks by analyzing vast amounts of data.
A neural network can learn to recognize a dog by gleaning patterns from thousands of dog photos. It can learn to identify spoken words by sifting through old tech-support calls.
And, as the two researchers showed, a neural network can learn to write phishing messages by inspecting tweets, Reddit posts, and previous online hacks.
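To make that learning process concrete, here is a minimal sketch, not the researchers’ actual code and using entirely made-up toy data: a tiny feed-forward neural network, written with only NumPy, that adjusts its weights by gradient descent until it separates two classes of short texts represented as bag-of-words vectors. Real systems train far larger networks on vastly more data.

```python
# Minimal illustration (hypothetical data, not SNAP_R): a tiny neural
# network learns to separate two classes of toy "tweets" encoded as
# bag-of-words vectors, purely by adjusting weights to fit examples.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-word vocabulary, 1 = word present. Label 1 marks the
# invented "bait" examples; label 0 marks the rest.
X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
y = np.array([[1], [1], [1], [0], [0], [0]], dtype=float)

# One hidden layer with 8 units; weights start small and random.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: input -> hidden layer -> predicted probability.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)

    # Backward pass: gradients of the cross-entropy loss.
    grad_out = (p - y) / len(X)
    grad_W2 = h.T @ grad_out
    grad_W1 = X.T @ ((grad_out @ W2.T) * (1 - h ** 2))

    # Gradient-descent update nudges the weights toward fewer mistakes.
    W2 -= 1.0 * grad_W2
    W1 -= 1.0 * grad_W1

# After training, the predictions sit close to the labels.
print(np.round(sigmoid(np.tanh(X @ W1) @ W2), 2))
```

The point of the sketch is only the mechanism: nothing in the code knows what a phishing tweet is; the network simply keeps adjusting its weights until its outputs match the examples it is shown, which is how the same technique scales from toy patterns to dog photos, speech or bait messages when given enough data.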
Today, the same mathematical technique is infusing machines with a wide range of humanlike powers, from speech recognition to language translation. In many cases, this new breed of artificial intelligence is also an ideal means of deceiving large numbers of people over the internet. Mass manipulation is about to get a whole lot easier.
“It would be very surprising if things don’t go this way,” said Shahar Avin, a researcher at the Centre for the Study of Existential Risk at the University of Cambridge. “All the trends point in that direction.”