You may have noticed that Bing AI got a big upgrade to its image creation tool last week (among other recent improvements), but it appears that, having taken this sizeable step forward, Microsoft has now taken a step back.
In case you missed it, Bing’s image creation system was upgraded to a whole new version – Dall-E 3 – which is much more powerful. So much so that Microsoft noted the supercharged Dall-E 3 was generating a lot of interest and traffic, and so might be sluggish initially.
There’s another issue with Dall-E 3, though: as Windows Central observed, Microsoft has considerably reined in the tool since its recent revamp.
Now, we were already made aware that the image creation tool would employ a ‘content moderation system’ to stop inappropriate pics being generated, but it seems the censorship imposed is harsher than expected. This might be a reaction to the kind of content Bing AI users have been trying to get the system to create.
As Windows Central points out, there has been a lot of controversy (unsurprisingly) around an image generated of Mickey Mouse carrying out the 9/11 attack.
The problem, though, is that beyond those kinds of extreme asks, as the article makes clear, some users are finding innocuous image creation requests being denied. Windows Central tried to get the chatbot to make an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms of using Bing AI.
By contrast, the article’s author noted that only last week they could create violent zombie apocalypse scenarios featuring popular (copyrighted) characters without Bing AI raising a complaint.
Analysis: Random censorship
The point here is that the censorship looks like an overreaction – or at least it seems that way going by reports, we should add. Microsoft appears to have left the rules too slack in the initial implementation, but has now tightened things too much.
What really illustrates this is that Bing AI is even censoring itself, as highlighted by someone on Reddit. Bing Image Creator has a ‘surprise me’ button that generates a random image (the equivalent of Google’s ‘I’m feeling lucky’ button, if you will, that produces a random search). But here’s the kicker – the AI is going ahead, creating an image, and then censoring it immediately.
Well, we suppose that is a surprise, to be fair – and one that would seem to aptly demonstrate that Microsoft’s censorship of the Image Creator has maybe gone too far, limiting its usefulness at least to some extent. As we said at the outset, it’s a case of a step forward, then a quick step back.
Windows Central observes that it was able to replicate this scenario of Bing’s self-censorship, and that it’s not even a rare occurrence – it reportedly happens around a third of the time. It sounds like it’s time for Microsoft to do some more fine-tuning around this area, although in fairness, when new capabilities are rolled out, there are likely to be adjustments applied for some time – so perhaps that work could already be underway.
The danger of Microsoft erring too strongly on the ‘better safe than sorry’ side of the equation is that this will limit the usefulness of a tool that, after all, is supposed to be about exploring creativity.
We’ve reached out to Microsoft to check what’s going on with Bing AI in this respect, and will update this story if we hear back.