AI Tech That Creates Life-Wrecking Deep Fake Images

Have you ever posted pictures of yourself on social media platforms like Facebook, Instagram, TikTok, and others? If so, it may be time to reconsider those postings.

This is because a new AI image-generation technology allows users to collect a handful of photos and video frames of you and then train a model to create “realistic” fake images depicting you in embarrassing, compromising, and sometimes outright illegal situations.

Not everyone is at risk, but the threat is real.

Photographs have always been prone to manipulation and falsification, from the darkroom era, when film was cut with scissors and pasted together, right through to today’s photoshopping of pixels.

But while faking a photo was once a daunting task requiring a measure of specialist skill, these days creating convincing photorealistic fakes has become almost too easy.

First, an AI model must learn to render or synthesize an image of a person from example photos via software. Once the model can do this successfully, the person’s likeness becomes a plaything for the tech, and the model can generate images in effectively infinite quantities.

When someone chooses to share the trained AI model, other people can join in and start creating images of that person as well.

[Image: Real or AI-generated?]

Social Media Case Studies

A volunteer described as “brave” by tech publication Ars Technica had initially allowed the company to use his images to create fakes, but he quickly had a change of heart.

This is because, in no time, the images rendered by the AI model proved too convincing and too damaging to his reputation.

Given the high reputational risk, the publication instead used an AI-generated fictitious person, John.

John, the fictitious subject, was an elementary school teacher who, like many other people, had posted images of himself on Facebook: at work, chilling at home, and at the occasional event.

These largely inoffensive images of “John” were then used to train the AI to put him in far more compromising positions.

From only seven images, the AI could be trained to generate images that make it appear as if John lives a secret double life. For instance, he appeared as somebody who enjoyed posing nude for selfies in his classroom.

At night, he went to bars looking like a clown.

On weekends, he was part of an extremist paramilitary group.

The AI also created the impression that he had done time in prison on a drug charge but had concealed this fact from his employer.

In another picture, John, who is married, is seen posing in an office beside a nude woman who is not his wife.

Using an AI image generator called Stable Diffusion (version 1.5) and a technique called Dreambooth, Ars Technica was able to train the AI to generate photos of John in any style. Although John was a fictitious creation, anyone could theoretically achieve the same results from five or more images, which could be plucked from social media accounts or taken as still frames from a video.

The process of teaching the AI to create images of John took about an hour and was free of charge thanks to a Google cloud computing service.

Once training was complete, creating the images still took several hours, the publication said, not because generation itself is slow but because of the need to comb through many “imperfect pictures” and use “trial-and-error” prompting to find the best images.

Even so, the publication found it remarkably easy compared with attempting to create a photorealistic fake of “John” in Photoshop from scratch.

Thanks to this technology, people like John can be made to look as if they acted illegally or committed immoral acts: housebreaking, using illegal drugs, or taking a nude shower with a student. If the AI models are optimized for pornography, people like John can become porn stars almost overnight.

One can also create images of John doing seemingly inoffensive things that can nonetheless be devastating, such as showing him drinking at a bar when he has pledged sobriety.

It doesn’t end there.

In lighter moments, a person can be rendered as a medieval knight or an astronaut, made to look young or old, or dressed up in costume.

However, the rendered images are far from perfect. A closer look can reveal them as fakes.

The trouble is that the technology creating these images is being upgraded significantly and could soon make it impossible to distinguish a synthesized photo from a real one.

Yet despite their flaws, the fakes could cast shadows of doubt over John and potentially ruin his reputation.

Of late, a number of people have used this same technique (with real people) to generate quirky and artistic profile photos of themselves.

Commercial services and apps like Lensa, which handle the training, have also mushroomed.

How does it work?

The work on John might seem remarkable to anyone who has not been following the trends. Today, software engineers know how to create new photorealistic images of anything one can imagine.

Apart from photos, AI has controversially allowed people to create new artwork that clones existing artists’ work without their permission.

Suspended due to ethical concerns

Mitch Jackson, a US technology lawyer, expressed concern over the proliferation of deepfake technology on the market and says he will be studying the technology’s legal impacts for most of 2023.

“Distinguishing between what’s real and what’s fake will eventually become impossible for most consumers,” he says.

“Adobe already has audio technology called Adobe VoCo that allows anyone to sound exactly like someone else. Work on Adobe VoCo was suspended due to ethical concerns, but dozens of other companies are perfecting the technology, with some offering alternatives today. Take a look, or listen, for yourself,” Jackson said.

Picture and video versions of deepfakes are getting better and better, he says.

“Sometimes, it’s impossible to tell the fake videos from the real ones,” he adds.

Stable Diffusion is a deep-learning image-synthesis model that can create new images from text descriptions; it can run on a Windows or Linux PC, on a Mac, or in the cloud on rented computer hardware.

Through intensive training, Stable Diffusion’s neural network has learned to associate words with images and, more generally, the statistical associations between the positions of pixels in images.

Because of this, one can give Stable Diffusion a prompt, such as “Tom Hanks in a classroom,” and it will return a new image of Tom Hanks in a classroom.
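
To make this concrete, here is a minimal sketch of running such a text-to-image prompt with the open source Hugging Face diffusers library; the library choice, model identifier, and file name are our illustrative assumptions, not details from the article.

```python
# Minimal text-to-image sketch using the open source "diffusers" library.
# The model ID below is the public Stable Diffusion v1.5 checkpoint; a CUDA
# GPU is assumed for reasonable generation speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# A plain-text prompt is the only required input; the pipeline returns PIL images.
image = pipe("Tom Hanks in a classroom").images[0]
image.save("tom_hanks_classroom.png")
```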

In Tom Hanks’ case, this is a walk in the park because hundreds of his photos are already in the data set used to train Stable Diffusion. But to make images of people like John, the AI needs a bit of help.

That’s where Dreambooth kicks in.

Dreambooth, which was launched on August 30 by Google researchers, uses a special technique to teach Stable Diffusion new subjects through a process called “fine-tuning.”

At first, Dreambooth was not associated with Stable Diffusion, and Google had not made its source code available amid fears of abuse.

In no time, someone found a way to adapt the Dreambooth technique to work with Stable Diffusion and released the code freely as an open source project, making Dreambooth a very popular way for AI artists to teach Stable Diffusion new artistic styles.
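
As an illustration of what the end result looks like in code, here is a minimal sketch of sampling from a Stable Diffusion checkpoint that has already been fine-tuned with Dreambooth. The output directory ./dreambooth-john and the rare identifier token “sks” are hypothetical examples of the convention Dreambooth uses, not artifacts from the article.

```python
# Sketch: sampling from a (hypothetical) Dreambooth fine-tuned checkpoint.
# During fine-tuning, Dreambooth binds the new subject to a rare token
# (conventionally "sks"); prompts then use that token to invoke the subject.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-john",   # hypothetical directory holding the fine-tuned weights
    torch_dtype=torch.float16,
).to("cuda")

# The learned token now refers to the specific person in the training photos.
image = pipe("a photo of sks person dressed as an astronaut").images[0]
image.save("astronaut.png")
```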

Worldwide impact

An estimated 4 billion people worldwide use social media. Since many of us have uploaded more than a handful of photos of ourselves, we could all become vulnerable to such attacks.

Although the impact of image-synthesis technology has been depicted here from a man’s point of view, women tend to bear the brunt of it.

When a woman’s face or body is rendered, her identity can get mischievously inserted into pornographic imagery.

This has been made possible by the huge number of sexualized images found in the data sets used for AI training.

In other words, the AI is all too familiar with how to generate pornographic imagery.

In a bid to address some of these ethical issues, Stability AI was forced to remove NSFW material from the training data set for its more recent 2.0 release.

Although its software license bars people from using the AI generator to make images of people without their permission, there is little to no potential for enforcement.

Children are also not safe from synthesized images and could be bullied using this technology, even in cases where the pictures are not manipulated.

[Image: Made by humans?]

Is there anything we can do about it?

What to do varies from person to person. One option is the drastic step of taking all of one’s images offline altogether.

While that may work for ordinary people, it’s not much of a solution for celebrities and other public figures.

However, in the future, people may be able to protect themselves from photo abuse through technical means. Future AI image generators could be legally compelled to embed invisible watermarks into their outputs.

That way, the watermarks could be read later, making it easy for people to tell that the images are fakes.

Extensive regulation is necessary. “Any piece of manipulated or fake content should be required to prominently display a letter or warning, much like the movie ratings (G, PG, R, and X). Maybe something like ‘Digitally Altered’ or ‘DA’,” Jackson says.

Stability AI launched Stable Diffusion as an open source project this year.

To its credit, Stable Diffusion already embeds watermarks by default, but people running its open source version can get around this by disabling the watermarking component of the software or removing it entirely.
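
For illustration, here is a minimal sketch of how such an invisible watermark can be embedded and read back using the open source invisible-watermark package; the DWT-DCT method shown is the one that package implements, and the four-byte payload and file names are our own examples.

```python
# Sketch: embedding and reading an invisible watermark with the open source
# "invisible-watermark" package (pip install invisible-watermark).
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread("generated.png")

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"fake")    # 4-byte payload, our own example
marked = encoder.encode(bgr, "dwtDct")     # frequency-domain, invisible to the eye
cv2.imwrite("generated_marked.png", marked)

# Later, anyone with the decoder can check whether the mark is present.
decoder = WatermarkDecoder("bytes", 32)    # 32 bits = 4 bytes
payload = decoder.decode(cv2.imread("generated_marked.png"), "dwtDct")
print(payload.decode("utf-8"))             # -> "fake"
```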

MIT to mitigate

Although this is purely speculative, a watermark added voluntarily to personal photos might be able to disrupt the Dreambooth training process. A group of MIT researchers has proposed PhotoGuard, an adversarial process that aims to protect photos from AI synthesis by making minor, invisible modifications to them. This, however, is limited to AI editing (often called “inpainting”) use cases and does not cover the training or generation of images.
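
To make the underlying idea concrete, here is a heavily simplified sketch of the adversarial principle PhotoGuard builds on, not the researchers’ actual code: projected gradient descent (PGD) finds a tiny, bounded perturbation that pushes the image’s latent representation, as seen by a stand-in encoder network, away from its true value. The function name, encoder argument, and eps/alpha/steps values are all our illustrative choices.

```python
# Toy sketch of the adversarial-perturbation idea behind PhotoGuard; this is
# our simplification, not the MIT implementation. A bounded perturbation is
# optimized so that an encoder (standing in for a diffusion model's image
# encoder) produces a latent far from the one it assigns the clean photo.
import torch
import torch.nn.functional as F

def adversarial_perturb(image, encoder, eps=8 / 255, alpha=2 / 255, steps=40):
    """image: float tensor in [0, 1], shape (1, 3, H, W); encoder: nn.Module."""
    with torch.no_grad():
        clean_latent = encoder(image)          # latent of the unmodified photo
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Maximizing latent distance == minimizing its negative.
        loss = -F.mse_loss(encoder(image + delta), clean_latent)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign() # descend on the negated distance
            delta.clamp_(-eps, eps)            # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```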

“AI is taking over writing & painting! Deep fakes will ruin video! Good. That means live performance becomes even MORE valuable. Tradeshows will thrive. Humans want to do business with humans. Meatspace is still bestspace,” says Jonathan Pitchard.

Of late, there has been a proliferation of AI technologies that write poems, rhymes, and songs, and some that have mastered games.

Critics have taken these technological advancements negatively, believing that AIs are taking over human jobs.

/MetaNews.
