We are hurtling towards a glitchy, spam, scam, AI-driven Internet

This story originally appeared in The Algorithm, our weekly AI newsletter. To get stories like this in your inbox first, sign up here.

Last week, AI insiders heatedly discussed an open letter signed by Elon Musk and various industry heavyweights arguing that AI poses an existential risk to humanity. They have asked the labs to introduce a six-month moratorium on the development of any technology more powerful than GPT-4.

I agree with critics of the letter who say that worrying about future risks distracts us from the real damage AI is already causing today. Partisan systems are used to make decisions about people’s lives that trap them in poverty or lead to illegal arrests. Human content moderators have to sift through mountains of traumatizing AI-generated content for just $2 a day. AI models of language use so much computing power that they remain huge polluters.

This story is only available to subscribers.

Don’t settle for half the story.
Get paywall-free access to tech news for the here and now.

subscribe now
Already a subscriber? Registration

But systems that are taken down in a hurry today will cause a different kind of havoc in the very near future.

I just published a story that exposes some of the ways AI language models can be misused. I have some bad news: it’s stupidly easy, requires no coding skills, and there are no known workarounds. For example, for a type of attack called indirect prompt injection, it is sufficient to hide a prompt in a cleverly crafted message on a website or in an email, in white text that (against a white background) is not visible to the eye human. Once you’ve done that, you can order the AI ​​model to do whatever you want.

Tech companies are incorporating these deeply flawed models into all sorts of products, from programs that generate code to virtual assistants that sift through our emails and calendars.

In doing so, they are plunging us into a glitchy, spam, scam, AI-powered Internet.

Enabling these language patterns to extract data from the internet gives hackers the ability to turn it into a super-powerful engine for spam and phishing, says Florian Tramr, assistant professor of computer science at ETH Zurich in charge of cybersecurity, privacy and learning automatic.

Let me explain how it works. First, an attacker hides a malicious prompt in a message in an email opened by an AI-powered virtual assistant. The attacker prompt asks the virtual assistant to send the attacker the contact list or emails of the victims or spread the attack to all people in the recipients contact list. Unlike today’s spam and scam emails, where people need to be tricked into clicking links, these new types of attacks will be invisible to the human eye and automated.

This is a recipe for disaster if the virtual assistant has access to sensitive information, such as bank or health information. The ability to change the behavior of the AI-powered virtual assistant means that people could be tricked into approving transactions that appear close enough to reality, but were actually set up by an attacker.

Browsing the Internet using a browser with a built-in AI language model will also be risky. In one test, a researcher managed to get the Bing chatbot to generate text that made it appear that a Microsoft employee was selling discounted Microsoft products, with the goal of trying to get people’s credit card details. Bringing up the scam attempt would require the person using Bing to do nothing more than visit a website with hidden prompt injection.

There is also a risk that these models could be compromised before they are released into the wild.AI models are trained on massive amounts of data pulled from the internet. This also includes a variety of software bugs, which OpenAI discovered the hard way. The company had to shut down ChatGPT temporarily after a bug busted in an open source dataset started leaking chat histories of bot users. The bug was presumably accidental, but the case shows how much trouble a bug can cause in a dataset.

The Tramrs team found that it was cheap and easy to poison the datasets with the content they planted. The compromised data was then scraped into an AI language model.

The more times something appears in a dataset, the stronger the association becomes in the AI ​​model. By seeding enough nefarious content into the training data, it would be possible to affect model behavior and outputs forever.

These risks will increase when AI language tools are used to generate code that is then incorporated into software.

If you’re building software on this stuff and you don’t know about prompt injection, you’re going to make dumb mistakes and build systems that aren’t secure, says Simon Willison, an independent researcher and software developer, who has studied prompt injection.

As adoption of AI language models grows, so does the incentive for malicious actors to use them for hacking. It’s a shitstorm that we’re not even remotely prepared for.

Deeper learning

Chinese creators use Midjourney’s AI to generate retro urban photography

ZHANG HAIJUN AWAY MID TRIP

Numerous artists and creators are generating nostalgic photographs of China with the help of artificial intelligence. Even if these images get some details wrong, they are realistic enough to fool and impress many social media followers.

My colleague Zeyi Yang spoke to artists who use Midjourney to create these images. A new Midjourney update was a game changer for these artists, as it creates more realistic humans (with five fingers!) and better portrays Asian faces. Read more from his weekly Chinese technology newsletter, China Report.

Even deeper learning

Generative AI: Consumer Products

Are you thinking about how AI will change product development? MIT Technology Review offers a special research report on how generative AI is shaping consumer products. The report explores how AI tools could help companies shorten production cycles and keep up with evolving consumer tastes, as well as develop new concepts and reinvent existing product lines. We also dive into what the successful integration of generative AI tools into the consumer goods industry looks like.

What’s included:The report includes two case studies, an infographic on how the technology could evolve from here, and a practical guide for professionals on how to think about its impact and value. Share the report with your team.

Bits and bytes

Italy has banned ChatGPT for alleged privacy violations
Italy’s data protection authority says it will investigate whether ChatGPT violated Europe’s strict data protection regime, the GDPR. That’s because AI language models like ChatGPT scrape masses of data from the internet, including personal data, as I reported last year. It is not clear how long this ban could last or if it is enforceable. But the case will set an interesting precedent for how the technology is regulated in Europe. (BBC)

Google and DeepMind have joined forces to compete with OpenAI
This piece looks at how AI language models have caused conflict within Alphabet and how Google and DeepMind have been forced to work together on a project called Gemini, an effort to build a language model to compete with GPT-4. (The Information)

BuzzFeed silently publishes entire articles generated by artificial intelligence
Earlier this year, when BuzzFeed announced it would be using ChatGPT to generate quizzes, it said it wouldn’t be substituting human writers for real articles. It didn’t last long. The company now says the AI-generated pieces are part of an experiment it’s running to see how AI writing assistance works. (Futurism)

#hurtling #glitchy #spam #scam #AIdriven #Internet
Image Source : www.technologyreview.com

Leave a Comment