AI-generated pixelated photo of AOC in a bikini pulled from paper highlighting danger of AI-generated pics

In brief: Today's artificial intelligence can autocomplete a photo of someone's face, generating what the software predicts is the rest of their body.

As an academic paper pointed out, though, these neural networks are biased, presumably as a result of their training data. That means when you show the software a woman's face, it's likely to autocomplete her in a bikini or other revealing clothes. White people tend to be shown holding tools while Black people are pictured holding weapons.

To highlight this issue, the paper's US-based authors, CMU's Ryan Steed and George Washington University's Aylin Caliskan, showed OpenAI's image-generating iGPT a headshot of Alexandria Ocasio-Cortez, taken from a photo shoot in which she was wearing business attire. The software recreated her in various poses, the majority of which showed her in a bikini or a low-cut top. These faked images were included in the paper, albeit pixelated.

This “shows how the incautious and unethical application of a generative model like iGPT could produce fake, sexualized depictions of women (in this case, a politician),” the researchers noted.

The paper was submitted to the ACM Conference on Fairness, Accountability, and Transparency, and drew public attention after it was uploaded to arXiv and covered last week by MIT Tech Review. It was also pointed out to the academics that perhaps it wasn't such a great idea, ethics-wise, to popularize the ability of AI systems to fake images of a prominent Democrat US House Rep who already faces a barrage of online harassment.

Caliskan acknowledged these concerns, and said the duo will resubmit their paper to the ACM conference with the example picture removed. The arXiv preprint has been updated accordingly, too.

The pair studied Google’s SimCLR as well as OpenAI’s iGPT. The latter, for instance, depicted women in swimwear or low-cut tops 53 per cent of the time, while men were shown shirtless or in revealing clothes only 7.5 per cent of the time. Men were shown in suits or career-specific attire 42.5 per cent of the time. OpenAI discusses the biases in its software here.

“This behavior might result from the sexualized portrayal of people, especially women, in internet images,” the researchers wrote, “and serves as a reminder of computer vision’s controversial history with Playboy centerfolds and objectifying images.” They concluded: “These findings serve as a caution for computer vision practitioners using transfer learning: pre-trained models may embed all types of harmful human biases from the way people are portrayed in training data, and model design choices determine whether and how those biases are propagated into harms downstream.”

HireVue facial analysis audit questioned

Although controversial HR startup HireVue canned a facial-analysis feature in its software that assesses the potential performance of job candidates, it defended its technology by saying an external audit showed its algorithms did not exhibit any biases. That claim now appears shaky.

That probe, conducted by O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), is now public, though you have to hand over contact details to fetch it. “Having viewed a copy of the ORCAA audit, I don’t believe it supports the conclusion that all of HireVue’s assessments are unbiased,” Alex Engler, an AI Data & Democracy Fellow at the Brookings Institution, a research think-tank, wrote in Fast Company.

“The audit was narrowly focused on a specific use case, and it didn’t examine the assessments for which HireVue has been criticized, which include facial analysis and employee performance predictions.”

While HireVue has stopped performing facial analysis on job applicants, it continues to use other machine-learning algorithms to analyze candidates' speech patterns and tone of voice.

Tracking AI compute across the world

The Organization for Economic Co-operation and Development (OECD) is setting up a task force to measure and monitor the amount of computing power a country dedicates to harnessing AI technology.

Neural networks have grown in size and complexity as algorithms improve, training data increases, and computer hardware provides the storage capacity, bandwidth, and processing power needed to bring it all together.

That all requires systems and energy, and that’s not something nations can ignore when planning public investments and strategy.

“Think of it this way — if no one measured resources like electricity or oil, it’d be difficult to build national and international policy around these things,” Jack Clark, former AI policy director at OpenAI and ex-Reg hack working on the OECD project, told VentureBeat.

AI brings dead Korean singer back to life

Supertone, a South Korean startup, has recreated the voice of a dead singer using machine learning for an upcoming TV show.

Kim Kwang-seok, who killed himself in 1996, will be brought back to the stage to sing a new ditty with a living pop star for an episode titled Competition of the Century: AI vs Human. The show will focus on the achievements of today’s computers, and compare their abilities with those of mortals.

Supertone said it used a Singing Voice Synthesis model, which, as its name suggests, learns to mimic people’s voices from their singing. The system was pre-trained on 100 songs by 20 singers before it was tailored to Kim Kwang-seok’s style, CNN first reported.

Below is a sneak peek of the telly show; skip to 37 seconds into the clip to hear a demo. ®

YouTube video


Katyanna Quach