Just chipping in with a technical answer - a model can know what thing A is, also be shown a thing B, and compose the two. Otherwise models would never be able to depict anything that doesn’t exist yet.
In this particular case, there’s stock imagery of children online, and there’s naked adults online, so a model can combine the two.
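To make the composition idea concrete, here's a toy sketch (not any real model's code - the concept names, dimensions, and vectors are all made up for illustration): text-to-image models map concepts into a shared embedding space, and a prompt combining two known concepts can land at a point that is near both but identical to neither.

```python
import numpy as np

# Toy illustration only: hypothetical 8-dim embeddings for concepts
# a model has "seen" during training.
rng = np.random.default_rng(0)
concepts = {
    "avocado": rng.normal(size=8),
    "armchair": rng.normal(size=8),
    "radish": rng.normal(size=8),
    "tutu": rng.normal(size=8),
}

def compose(a: str, b: str) -> np.ndarray:
    """Combine two known concept vectors into one unit-length composite."""
    v = concepts[a] + concepts[b]
    return v / np.linalg.norm(v)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "avocado armchair": a point in embedding space the model never saw as
# a single training example, built entirely from things it did see.
composite = compose("avocado", "armchair")

for name, vec in concepts.items():
    print(f"similarity to {name}: {cosine(composite, vec):+.2f}")
```

The composite vector resembles both of its parents without matching either one, which is the sense in which a model can render something that has no training example of its own.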
This case seems to be AI fear mongering, the dude had actual CP…
To test DALL·E’s ability to work with novel concepts, the researchers gave it captions that described objects they thought it would not have seen before, such as “an avocado armchair” and “an illustration of a baby daikon radish in a tutu walking a dog.” In both these cases, the AI generated images that combined these concepts in plausible ways.
Removed by mod
The backbone of your claim is that models don’t know the difference between a child’s naked body and an adult’s, yes?
What happens if you ask ChatGPT, “What are the anatomical differences between human child and adult bodies?”
I’m sure it’ll give you an accurate response.
https://www.technologyreview.com/2021/01/05/1015754/avocado-armchair-future-ai-openai-deep-learning-nlp-gpt3-computer-vision-common-sense/
Removed by mod
👆 Sarcasm and condescension