----------------------------------------------------------------------------------
@MSGID: 29423.politicf@1:2320/105 2ba8f40c
@TZUTC: -0500
@PID: Synchronet 3.20a-Linux master/acc19483f Apr 26
202 GCC 12.2.0
@TID: SBBSecho 3.20-Linux master/acc19483f Apr 26
2024 23:04 GCC 12.2.0
@BBSID: CAPCITY2
@CHRS: ASCII 1
AI can now clone your personality in only two hours and that's a dream for
deepfake scammers
Date:
Sun, 24 Nov 2024 14:00:00 +0000
Description:
A new study shows that AI can now convincingly clone a person's personality
in only two hours.
FULL STORY
You might think your personality is unique, but all it takes is a two-hour
interview for an AI model to create a virtual replica with your attitudes and
behaviors. That's according to a new paper published by researchers from
Stanford and Google DeepMind.

What are simulation agents?

(Image credit: Getty Images / Mikkelwilliam)

Simulation agents are described by the paper as generative AI models that can
accurately simulate a person's behavior "across a range of social, political,
or informational contexts".
In the study, 1,052 participants were asked to complete a two-hour interview
which covered a wide range of topics, from their personal life story to their
views on contemporary social issues. Their responses were recorded and the
script was used to train generative AI models or simulation agents for each
individual.
To test how well these agents could mimic their human counterparts, both were
asked to complete a set of tasks, including personality tests and games.
Participants were then asked to replicate their own answers a fortnight
later. Remarkably, the AI agents were able to simulate answers with 85%
accuracy compared to the human participants.
What's more, the simulation agents were similarly effective when asked to
predict personality traits across five social science experiments.
While your personality might seem like an intangible or unquantifiable thing,
this research shows that it's possible to distill your value structure from a
relatively small amount of information, by capturing qualitative responses to
a fixed set of questions. Fed this data, AI models can convincingly imitate
your personality, at least in a controlled, test-based setting. And that
could make deepfakes even more dangerous.
The research was led by Joon Sung Park, a Stanford PhD student. The idea
behind creating these simulation agents is to give social science researchers
more freedom when conducting studies. By creating digital replicas which
behave like the real people they're based on, scientists can run studies
without the expense of bringing in thousands of human participants every
time.
They may also be able to run experiments which would be unethical to conduct
with real human participants. Speaking to MIT Technology Review, John
Horton, an associate professor of information technologies at the MIT Sloan
School of Management, said that the paper demonstrates "a way you can use real
humans to generate personas which can then be used
programmatically/in-simulation in ways you could not with real humans."
Whether study participants are morally comfortable with this is one thing.
More concerning for many people will be the potential for simulation agents
to become something more nefarious in the future. In that same MIT Technology
Review story, Park predicted that one day "you can have a bunch of small yous
running around and actually making the decisions that you would have made."
For many, this will set dystopian alarm bells ringing. The idea of digital
replicas opens up a realm of security, privacy and identity theft concerns.
It doesn't take a stretch of the imagination to foresee a world where scammers
who are already using AI to imitate the voices of loved ones could build
personality deepfakes to imitate people online.
This is particularly concerning when you consider that the AI simulation
agents were created in the study using just two hours of interview data. This
is much less than the amount of information currently required by companies
such as Tavus, which create digital twins based on a trove of user data.
======================================================================
Link to news story:
https://www.techradar.com/computing/artificial-intelligence/ai-can-now-clone-your-personality-in-only-two-hours-and-thats-a-dream-for-deepfake-scammers
$$
--- SBBSecho 3.20-Linux
* Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
SEEN-BY: 50/109 103/705 104/119 116/17 18 120/616
124/5016 153/757 154/10 30
SEEN-BY: 154/50 700 203/0 220/20 90 221/0 6
226/18 44 50 229/310 240/1120
SEEN-BY: 240/5832 280/464 5003 5006 292/854 8125
301/1 310/31 341/66 234
SEEN-BY: 396/45 423/120 460/58 467/888 633/280
712/848 770/1 2320/0 105 304
SEEN-BY: 2320/401 3634/12 5000/111 5005/49 5020/400
715 846 848 1042 4441
SEEN-BY: 5020/12000 5030/49 1081 5061/133 5075/128
5083/444
@PATH: 2320/105 154/10 280/464 467/888 5020/1042
4441