Google and OpenAI put the "art" in artificial intelligence with image creating bots
Mountain View, California - Google and OpenAI, the artificial intelligence lab Elon Musk co-founded, are one-upping each other with their image-making bots.
Google's Imagen and OpenAI's DALL-E 2 are examples of artificial intelligence that can make ridiculously detailed images if you give them just a sentence or two.
The race is naturally on.
Alan Resnick shared a glimpse of the power of DALL-E 2, which took three little words – "A bad photo" – and made a collection of images that sometimes really look like someone just took a bad pic.
Imagen then sped past its competition with shots that humans rated as looking better than their counterparts from OpenAI's machine. Among them is a set of crisp and surreal images of animals doing human stuff posted by Raphaël Millière. We particularly like the swimming teddy bear.
Each company is pushing the boundaries of what lines of code can do, and veering heavily towards the exit lane for Uncanny Valley.
It's all in the dataset
To make these magical pictures, artificial intelligence has to be trained on gigantic collections of data, called datasets.
OpenAI's DALL-E 2 was trained on "clean" datasets limited to safe-for-work content, so its AI never learned how to make NSFW stuff. Google, however, has gone a different route.
According to its official Imagen report, Google's AI uses a dataset "which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes."
That means Imagen may be learning how to be bigoted.
This opens the door to the same dystopian room a group of chemists accidentally entered earlier this year, when their AI came up with the formula for the deadly VX nerve agent.
Tech titans are throwing their deep pockets at artificial intelligence, but there are already some glaring issues popping up, like building an AI that can use racist slurs.
Cover photo: Collage: 123RF/thesomeday123, oatloveit