Future Crew - Second Reality (2023 AI Version)
category: general [glöplog]
Feel free to check YouTube for: watch?v=7rJ0dJFf6JI
drugged reality
1) posting links isn't that hard
2) no
What a way to desecrate Future Crew's legacy. Why?
lynn: I thought the same at first, but then recalled there is a restriction that doesn't allow new accounts to post links straight away.
No
@FCAI : well, at least, someone had to try...
I couldn't watch the entire video, but it shows something: the tools we have at hand when "using" an AI-powered image generator aren't remotely mature.
It looks like you've been struggling to get the AI to control the generated content of the image, the subject, the way it moves, the camera motions... and none of it worked :(
I don't blame you; I think it's one of the most problematic limitations at this point...
A GAN is supposed to be deterministic, but most of the AI tools provided will inject a huge dose of random, so you never exactly now what to expect.
Maybe the combination of Stable Diffusion + ControlNet will give something? Maybe it's already what you tried (I'm no expert).
Using a state-of-the-art demo as a reference is a good exercise. It's not an act of creation, but I'm fine with that... It's more like an effort of testing the AI tools and trying to reach their limitations.
Why not? :)
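The determinism point above can be made concrete: a generator network is a fixed function, and what makes each run different is the random noise fed into it, so pinning the random seed pins the output. A minimal numpy sketch of the idea (the `initial_latents` helper is hypothetical, purely for illustration; front-ends usually expose this as a "seed" setting):

```python
import numpy as np

def initial_latents(seed, shape=(4, 64, 64)):
    # Image generators start from random noise; fixing the RNG seed
    # fixes that noise, which is what makes a run reproducible.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_latents(42)
b = initial_latents(42)   # same seed -> identical noise -> identical image
c = initial_latents(7)    # different seed -> different image

print(np.array_equal(a, b))  # True
print(np.array_equal(a, c))  # False
```

When a tool never lets you set the seed, every generation effectively rolls new dice, which matches the "huge dose of random" described above.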
"so you never exactly now what to expect" -> "so you never exactly know what to expect".
Sorry.
Disturbing stuff... why is it always so creepy? They're gonna torture us for sure when they take over. Better hurry to drink myself to death... høhø
More interesting in its creepiness than most 2nd Reality dead-horse-whipping.
ö_O?...
This is horrible in every way conceivable, thanks.
Quote:
1) posting links isn't that hard
2) no
There's a spam protection filter that requires you to have at least 1 glöp.
hey this is cheating, it's Second Reality plus Unreal and some other mini demos
still better than a 20-minute C64 demo
Quote:
@FCAI : well, at least, someone had to try...
I couldn't watch the entire video, but it shows something: the tools we have at hand when "using" an AI-powered image generator aren't remotely mature.
It looks like you've been struggling to get the AI to control the generated content of the image, the subject, the way it moves, the camera motions... and none of it worked :(
I don't blame you; I think it's one of the most problematic limitations at this point...
A GAN is supposed to be deterministic, but most of the AI tools provided will inject a huge dose of random, so you never exactly now what to expect.
Maybe the combination of Stable Diffusion + ControlNet will give something? Maybe it's already what you tried (I'm no expert).
Using a state-of-the-art demo as a reference is a good exercise. It's not an act of creation, but I'm fine with that... It's more like an effort of testing the AI tools and trying to reach their limitations.
Why not? :)
Thank you for the kind and honest words. I'm indeed no AI expert either, and this was my very first experiment based on AI. Ever. Hence this random (disturbing) result. I fully agree on that. It also was a great way to start learning about these (creepy) tools and their limitations.
I wasn't even expecting comments like "Woah, this is awesome". I mainly used an online platform called RunwayML. No Stable Diffusion, no ControlNet. But I enjoyed making and watching it. And if I hadn't shared it here, I would never have known what the members of pouet think about it. So, thank you everybody!
As you already wrote, it's an interesting way to learn about this stuff. Especially if you have a bottle of red wine next to you and start thinking "Hmmm, what kind of images could I start to mess with on my very first AI adventure..."
Sorry! ;)
Interesting and entertaining.
I also did some experiments with RunwayML and turned some of my most creepy xenomorph AI-art still images into a small movie, with ElevenLabs for the audio; see it here.
Indeed. Especially when humans or animals "become alive", or additional scenery gets added to camera movements. You will get everything from "woah" to "wtf" moments. Like at the end of your clip. wtf-creepy. :)
But that's what I liked about it. You'll never know what happens next, at least when there aren't dozens of buttons, switches and camera controls.
Stable Diffusion together with ControlNet seems to offer so much more. Not sure where we are heading with this, but interesting times for sure... I hope... ^^
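What ControlNet adds, roughly, is exactly those missing "camera controls": instead of prompting blind, you hand the model a conditioning image such as an edge map of the original frame, and the generation is steered to follow that structure. A toy stand-in for that preprocessing step (real pipelines typically use a proper Canny detector; `edge_map` here is a hypothetical helper, just to show what a conditioning image looks like):

```python
import numpy as np

def edge_map(img, thresh=0.2):
    # Crude gradient-magnitude edge detector: a stand-in for the Canny
    # preprocessor whose output is fed to a "canny" ControlNet model.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(np.uint8) * 255

# A white square on black: edges appear only along the square's border,
# so the generator would be constrained to keep that outline in place.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
ctrl = edge_map(img)
```

Feeding such a map per frame is one way people try to keep AI video temporally coherent, which is precisely what the clip above struggles with.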
In fact, some scenes are very funny. I liked the animated face of the ghoul from Panic.
There are several in-depth YouTube videos explaining RunwayML. In one of them, an uploaded screenshot of the famous "Pulp Fiction" dance scene, for example, ended up as a video being very close to the original movements of this iconic scene.
Was the music AI-remixed too?
The music is just a pitched-down version of Second Reality, roughly cut and edited within Vegas Video, adding some extra samples and a thunderstorm here and there.
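For anyone curious, the classic way to pitch a sample down is simply to resample it so it plays back slower, dropping pitch and tempo together, much as an old tracker would. A rough numpy sketch (`pitch_down` is a hypothetical helper, not taken from any tool mentioned here):

```python
import numpy as np

def pitch_down(samples, semitones):
    # Tracker-style pitch shift: resample the waveform so it plays back
    # slower; pitch and tempo drop together.
    ratio = 2 ** (-semitones / 12)      # playback-rate factor < 1
    n_out = int(len(samples) / ratio)   # lower pitch -> longer clip
    src = np.arange(n_out) * ratio      # fractional read positions
    return np.interp(src, np.arange(len(samples)), samples)

tone = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # 1 s of A4 at 8 kHz
lower = pitch_down(tone, 12)  # one octave down, now two seconds long
```

Dedicated time-stretch algorithms can drop the pitch without slowing the track, but the slowed-down version arguably suits the fever-dream mood better.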
That was actually pretty intense. The music complements this fever dream nicely. Speaking of dreams, if you haven't had enough yet, I'd gladly watch a similar AI version of Desert Dream as well.
Quote:
The music is just a pitched down version ...
Works surprisingly well.
Not only does this completely rape the work of the artists who did the original, it also adds nothing new to the conversation about where we stand today: AI is not art, it's imitation.
And the imitation in this is just horrid.