Over the last few days I have had many discussions about AI image generators and art. With tools like DALL-E, Midjourney, or Stable Diffusion I can create images with just a short description and some parameters, although I think AI is a misleading term here. The algorithms are not intelligent or sentient, even though they combine content; it feels more like a remix generator fed from existing knowledge (the training data). At the beginning of the month I got a Midjourney subscription to test one of the generators, and I have used it pretty often to get a feeling for the process and its potential. The surreal images and the open, social nature of Midjourney are really interesting and feel kind of unique, but after some time I am still not sure what to think about image generators if I want to hold a sophisticated opinion. One thing is for sure, though: they are a game changer.

Interestingly, some arguments in these discussions reminded me of the dawn of photography and the question of whether photography is art1. Some arguments are even the same ones that happened (and are still happening) in the photography scene2. It is a weird iteration of old topics, and I think there are many biases in that discussion; however, the longer I looked into it, the more I got the feeling it is just a mirror of human experiences.

The biggest concern I have (and the reason I am reluctant to put the images I generated on my site) is the question of copyright. Stable Diffusion, the open-source variant among the image generators, showed this when its training data was published. A very interesting article3 makes clear that almost none of the artists in the massive dataset on which Stable Diffusion was trained gave consent:

Nearly half of the images, about 47%, were sourced from only 100 domains, with the largest number of images coming from Pinterest. Over a million images, or 8.5% of the total dataset, are scraped from Pinterest’s CDN.

I think this is no different with DALL-E or Midjourney. Some pictures may have an open licence, but the crucial point is probably the datasets provided by other companies. We will see very interesting court cases about this, and as long as no good regulation exists I am reluctant to use an image generator outside of a private use case. Companies like Midjourney may give me a licence to use the generated image, but no one will help me if an artist (or a shady law firm) sues me for using an image generated from that artist's work in the training data.4 I also doubt the 'uniqueness' of the results. For example, with some prompts I used 'snow queen' and the results were just Elsa from Disney's 'Frozen' or iterations of the same dark fantasy picture of a snow queen; the training data didn't seem to contain much tagged with those words. With that in mind, I find it hard to claim that the algorithm generated new art when the output is already limited by the keywords used, especially when identifiable things start creeping through and I can guess exactly which source data was used (this is a big problem, in my opinion, once you start reselling the images for profit in some form).

So what would be a good solution here? I really don't know. Given the nature of these systems, attribution would probably be essentially impossible (although DARPA and Nvidia teamed up to build "forensics tools" that could detect this5). I would rather see that the artists at least gave permission, and if someone uses the results commercially, there should be a cut/royalty/patronage, because in the end their art, ideas, etc. were used in the training model. I could also imagine a protective system that you can apply to your art/images to prevent your work from entering a system like Midjourney for processing.6

Aside from the legal issues, it would also be helpful to be able to inspect the training data to check for biases in it7. I already had my own bias in assuming that the model or dataset would have properly labelled imagery and good training on the words I used, which is simply not the case. Hours and hours went into modifying parameters and exchanging words just to get the result I wanted. On the bright side, I will be a walking thesaurus by the end of the month.

But is it really art when I generate a very good piece with just a fitting prompt, or is it just exploitation of artists? Regarding the art, I think that really depends on your viewpoint8. There is simply no good definition; humans have been discussing questions like "What is art and what is beauty?" for thousands of years, and every time a new technique arrives the goalposts are moved. People much smarter than me have made careers out of examining that question, so I can only give my humble opinion here: personally, the results from the generator are not fine art for me, even if they look like it. There are still visible artefacts, like the eyes or simply the concept of a person holding things (or hands, fingers, etc.). It also seems to be 'fast art' for consumption, but that is no fault of the generator if the training data is just filled with art like this or if the majority demands it. However, I see it as a great tool for concept artists and for testing out ideas. I also don't think that "the emotion of the artist transmitted via the paintbrush stroke is lost" or that "the suffering of the artist, the 'essence' of the art, is removed"; the trope of the suffering artist is rather disturbing in my opinion. The question for me is more who the real artist is: the prompter who put hours into tweaking the prompt? The image generator? The artists in the training data? For fun I created a prompt of an Albrecht Dürer portrait as a cyberpunk hacker, which looked really good, but I would never consider myself the artist who created that image.9 I have seen quite a few people posting AI-generated art as their own "digital art" with no manipulation, or with just a slight tonal shift in Photoshop. I think the problem is that we generally assume that people who share art also made it; it would be more honest to say "I commissioned an AI to do this". People also have a bias against art if they know an AI created it10.

Which leads me to the exploitation of artists, and here I would say: yes, there definitely is exploitation, and I find some of the reactions rather creepy. This will change the job of illustrators dramatically; will they be replaced by people who write good prompts? That sounds straight out of a cyberpunk novel, where the low-life artisan has a small advantage on the art market because he or she knows the names of art genres or artists. There are already people who offer their work for prices way below minimum wage11. It boggles my mind, because no one wants to work below minimum wage, especially not in something that is not a 'menial' job but takes skill and years of practice. And the "adapt or die!" mentality that comes with this really puts me off. Do we really expect artists to become fixers of prompts for minimum wage and to release art into the world only to have it copied instantly? Honestly, I can understand if artists become reluctant to produce new art now. Even I, after spending so many hours with the generator, felt kind of numb and confused, especially seeing how good its photograph-like output can be12. And then some people say: "No, don't be afraid, we need you. We need you to feed the machine!" It makes me shudder to think that an artist puts all their skill into a work and releases it only to have people generate dozens of variants within minutes.

In the end I think it is all about the question of authenticity: showing what is genuine and authentic about us and our unique selves. Which leads me, finally, to the debate of "Should you edit your pictures?", where AI is also playing a bigger role. The discussion is a bit weird for me, because people seem to have different understandings of what 'edit' means and of why some want to edit their pictures as little as possible. There is always manipulation in the analog photo process, depending on how rigid we are in the definition of manipulation. The focal length of a lens will in fact manipulate the captured subject; this is most noticeable in the extremes of ultrawide and telephoto distortion. The lab also edits the results in a way, through the chemicals used or the scanning process. It is ridiculous to assume an "authentic" picture or print exists. I think the definitions of editing and altering pictures are just off: for some, an edit is already dust removal or cropping; for others it starts with heavy alterations. For me, editing is when I put additional things into the picture that were not present, like artificial clouds. I also try to edit my pictures as little as possible. The kick for me is to get the picture I want just with the film, composition, filters, and lighting tricks, and not with Adobe Lightroom using algorithms to put dramatic clouds on top that never existed in the first place. However, this is just my personal taste; other people used double negatives in the darkroom to copy and paste certain clouds, and I think this is also creative and genuine. It is simply not my cup of tea to edit a picture that heavily. For me, too many alterations make a picture feel overburdened and kind of empty.

Some AI projects are really cool, though, and I would use them to some degree. In the picture from the ferris wheel, I couldn't get all the dust off the polaroid, and removing the dust via GIMP would have taken a long time. There is an open-source project that can remove all the dust via AI13. It looks really promising, but sometimes I am not sure if I want to use it. For the ferris wheel polaroid I actually like the effect the dust produces: it now looks like I took the picture at night and the dust specks are stars, although I took it in broad daylight. But for some other pictures I would gladly use it, and I don't think that dust removal makes a picture less 'authentic'.

So what will we see next? I guess a reaction similar to the one the Impressionists had to the invention of photography14. 'Authentic' art will also play a bigger role, which in a way also means not playing the rigged game and ignoring things and people that don't inspire15. That is something I try to hold on to. I do photography for myself, not to impress the masses or to be part of any current trend. It doesn't have to resonate with everyone. I try to go my own way, learn new things, explore myself. If I can find inspiration with an image generator, sure, why not, as long as I keep being authentic.

  1. In the 19th century there was an ongoing debate about whether the medium of photography could create art. The photographer Gustave Le Gray was one advocate of recognizing the artistic qualities of photography. He wrote the 'Practical Treatise on Photography on Paper and on Glass', which also contained rules for posing models. He hated the commercialization of photography and tried to focus on its artistic side. Despite his efforts, photography was still not accepted as art in the 1890s; museums rarely collected or showed photos. The same seems to be happening now with image generators, although there are already galleries showing AI art (though not necessarily generated by image generators). ↩︎

  2. I often saw arguments like "Photography is no art", because a painting is somehow more genuine. "If you randomly click the shutter button on a camera, is it art?" Maybe? That depends on the outcome, intention, process, and the viewer. "Photography is dying; there is so much smartphone garbage out there", "All the snapshots are like a copy of a copy". It sounds like the complaints of many painters during the 19th century, who had previously made their earnings from painting family portraits. This seems to be happening again with the image generators. ↩︎

  3. ↩︎

  4. In my opinion this is a very legitimate concern, and I can understand the reluctance of shops or publishers. DriveThruRPG is an interesting example here. DTRPG added a clause to its terms and conditions stating that 'All product listings that feature art generated by a third-party source such as Inkarnate or Dungeondraft, or an AI-generation tool such as ArtBreeder, Midjourney, etc. are required to utilize the appropriate identifying filter (found under "Creation Method" in the Format section of title filters). Titles containing any art rendered by AI-generated tools must also display the following disclaimer in their product description: This product contains assets that were procedurally generated with the aid of creative software(s) powered by machine learning. Titles that do not comply are subject to removal from the marketplace. Repeat offenders may have their publishing permissions revoked.' And then suddenly that policy was removed after negative feedback. An odd reaction … ↩︎

  5. ↩︎

  6. Something like ‘Image Cloaking’: or ↩︎

  7. Even stock image sites have this problem. People need to produce pictures like 'a man cleaning stuff', for example, so that future training data doesn't default to white women only when the prompt is "person cleaning stuff". ↩︎

  8. Some interesting perspectives: ↩︎

  9. This is also an interesting question for a payment model for the artists. Albrecht Dürer's work is public domain, so royalties are not needed, but how do you determine what and who went into the cyberpunk and hacker elements? ↩︎

  10. ↩︎

  11. or ↩︎

  12. ↩︎

  13. See the paper and the source code from Daniela Ivanova. You can help train the AI here: ↩︎

  14. Sometimes the counter-movement has interesting consequences: when Le Gray was hired by the French government to photograph the historical monuments of France, he indirectly contributed to the Impressionist movement. His pictures of Gothic buildings inspired Monet and his paintings of Rouen Cathedral. ↩︎

  15. The taste of the masses is also not a good guideline. The most-liked photo on Instagram is a picture of an egg. Not some fancy, thought-provoking piece… an egg. ↩︎