An MIT student asked AI to make her portrait more professional. It gave her lighter skin and blue eyes.

MIT student Rona Wang asked an AI image app called Playground AI to make a photo of her look “professional.” It gave her lighter skin and blue eyes and “made me look Caucasian,” she said. Rona Wang

Rona Wang is no stranger to using artificial intelligence.

A recent MIT graduate, Wang, 24, has experimented with the variety of new AI image and language tools that have emerged in recent years and is intrigued by the ways they can often go wrong. She has even written about her ambivalence toward the technology on the school’s website.

Recently, Wang tried creating a LinkedIn profile photo of herself with AI portrait generators and received some bizarre results, such as images of herself with disjointed fingers and distorted facial features.

But last week, the result she got using one startup’s tool stood out from the rest.

On Friday, Wang uploaded a photo of herself smiling and wearing a red MIT hoodie to an image generator called Playground AI, and asked it to turn the image into a professional LinkedIn profile photo.

Within seconds, it produced an image nearly identical to her original selfie, except that Wang’s appearance had been changed. It made her complexion lighter and her eyes blue, features that “made me look Caucasian,” she said.

“I was like, ‘Wow, does this thing think I should go white to become more professional?’” said Wang, who is Asian-American.

The photo, which gained traction online after Wang shared it on Twitter, sparked a conversation about the shortcomings of AI tools when it comes to race. It even caught the eye of the company’s founder, who said he hoped to fix the problem.

Now, she thinks her experience with AI could be a cautionary tale for others using similar technology or pursuing careers in the field.

Wang’s viral tweet came amid a recent TikTok trend in which people have used AI products to spice up their LinkedIn profile photos, creating images that put them in professional attire and well-lit corporate settings.

Wang admits that when she first tried this particular AI tool, she had to laugh at the results.

“It was pretty funny,” she said.

But it also spoke to a problem she has repeatedly encountered with AI tools, which can sometimes produce troubling results when users experiment with them.

To be clear, Wang said, that doesn’t mean AI technology is malicious.

“It’s kind of offensive,” she said, “but at the same time I don’t want to jump to the conclusion that this AI must be racist.”

Experts say bias can lurk beneath the surface of AI tools, a phenomenon that has been observed for years. The datasets used to train these systems may not accurately represent various racial and ethnic groups, or may reproduce existing racial biases, they said.

Research, including studies at MIT, has found so-called AI bias in language models that associate certain genders with certain careers, and in oversights that cause facial recognition tools to malfunction for people with darker skin.

Wang, who majored in math and computer science and will return to MIT in the fall for a graduate degree, said her widely shared photo may have just been a blip; it’s possible the program randomly generated the facial features of a white woman. Or, she said, it may have been trained on a set of photos in which the majority of people depicted on LinkedIn or in professional settings were white.

It got her thinking about the possible consequences of such a misstep in a higher-stakes scenario, such as if a company used an AI tool to select the most “professional” candidates for a job and leaned toward people who looked white.

“I definitely think that’s a problem,” Wang said. “I hope people who are making software are aware of these biases and think about ways to mitigate them.”

The software’s makers were quick to respond.

Just two hours after she tweeted her photo, Playground AI founder Suhail Doshi responded directly to Wang on Twitter.

“The models aren’t instructable like that, so it’ll pick anything generic based on the prompt. Unfortunately, they’re not smart enough,” he wrote in response to Wang’s tweet.

“Happy to help you get the hang of it, but it takes a little more effort than something like ChatGPT,” he added, referring to the popular AI chatbot that produces large amounts of text in seconds from simple prompts. “For what it’s worth, we were quite unhappy with it and hope to fix it.”

In other tweets, Doshi said that Playground AI doesn’t support the use case of AI photo avatars, and that it definitely can’t preserve the identity of a face and restyle it or insert it into another scene, as Wang had hoped.

Reached by email, Doshi declined to be interviewed.

Instead, he answered a list of questions with a question of his own: “If I roll a die only once and get the number 1, does that mean I will always get the number 1? Should I conclude based on a single observation that the die is biased toward the number 1 and was trained to be predisposed to rolling a 1?”

Wang said she hopes her experience serves as a reminder that even as AI tools become more popular, people would be wise to use them with caution.

“There’s a culture where some people really put a lot of faith in AI and rely on it,” she said. “So I think it’s great to get people thinking about this, especially people who may have thought that AI bias was a thing of the past.”


