AI is forcing the hand of the Demoscene.
category: general [glöplog]
PS I don't want to blame the victim... it's obviously a problem with the law, which barely protects the work of unaffiliated people at all.
Open source and Public domain are not the same thing.
Creative commons and Public domain are not the same thing either.
Oh really? Tell that to Copilot's creators: https://en.wikipedia.org/wiki/GitHub_Copilot#Licensing_controversy
From the link:
Quote:
"training machine learning models on publicly available data is considered fair use across the machine learning community"
Also related: https://sfconservancy.org/GiveUpGitHub/
Do you actually have a point, or...?
https://www.theregister.com/2023/07/10/in_brief_ai/
Gargaj: apart from the joy of discussing something with humans? :)
that's just your assumption
The only problem i have with github is that it's hosted in the usa.
Quote:
Hosted in the USA.
Someday an AI called Boss Springsteen 2.0 will generate a song with that title.
What's the consensus here? Can I release an AI generated demo in a wild compo?
Don't ask. Do it.
Would it be better if GitHub was hosted in Russia? Or China?
Quote:
What's the consensus here? Can I release an AI generated demo in a wild compo?
Here is an opinion from a random stranger who has zero influence on party regulations.
In most cases, you should only be allowed to release something if you used a model trained on properly licensed data.
However, since demos are public domain and non-profit, using any data should be allowed, as long as you give proper credit - in keeping with the demoscene spirit.
Quote:
Would it be better if GitHub was hosted in Russia? Or China?
No - but hosting it in Europe would probably be better.
Although this could have some impact: https://blog.opensource.org/what-is-the-cyber-resilience-act-and-why-its-important-for-open-source/
Quote:
The most important scene compo is floppy disk throwing.
can i bring a machine that throws said thing?
i don't know if i can bring anything new to the discussion. if we stay within the demoscene the consensus pretty much seems to be:
It's just a tool.
Use however you like.
(credit where credit is due, tho)
which i would agree with. show your craftsmanship using or breaking all tools you can handle :) it's about art, innit?
however, there were a lot of interesting comments that touched a broader spectrum of use cases. and, like every tool:
It's just a tool.
Do NOT use however you like.
much like a kitchen knife can be used to create delicious food, it can also be used to cause harm to another (human) being.
we are talking here about a pretty new tool at our disposal and we do not know the effects and ripple effects certain use cases of this tool might have. however, we can try to make educated guesses.
[welp, nobody in their right mind would call these tools 'A.I.', but for continuity's sake it might be useful to stay with the label]
So, uhm, i would like to look at current wide-range use cases of AI. namely those i would call "time eaters"; you know, the algorithm feeding you the next piece of content. those are surprisingly good at maximizing "screen time" when engaging with a "common" human. they are designed for it, because that is the best way in our current economic system to make bling bling out of visitors.
moar screen time -> moar ads -> moar bling bling
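to make that incentive concrete: a toy ranker whose only objective is predicted screen time. a minimal sketch with made-up names (predicted_watch_time stands in for a learned model) - not any real platform's code.
Code:
# toy "time eater": rank the feed purely by expected watch time
from typing import Callable, List

def rank_feed(candidates: List[str],
              predicted_watch_time: Callable[[str], float]) -> List[str]:
    # the objective sees minutes of screen time and nothing else;
    # no term asks whether the viewer is better off afterwards
    return sorted(candidates, key=predicted_watch_time, reverse=True)

# usage: whatever the model expects you to binge floats to the top
watch_time = {"tutorial": 3.0, "rage-bait": 9.0, "cat-loop": 7.5}
print(rank_feed(list(watch_time), watch_time.__getitem__))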
is this a good thing? what about content that would be enjoyed, but would end screen time after being consumed? are we promoting couch potato content and actively trying to drown out activism (not in the political sense; just in general, aka "doing stuff")?
is this a problem?
and if so, what is the cause of this?
is it the tool? the greedy human? the socioeconomic context?
maybe we should write a law banning all advertisement, reasoning that a monetary incentive hinders the expression of free speech. or, in simple words, "don't give away bling bling to people so that they tell lies for you". Sure, that could maybe fix this "time eaters" problem by removing the coin flow from the system.
..but, in that case, i would prolly need to switch to a subscription-based platform and couldn't leech screen time from all those fools without an adblocker. inconvenient, but somehow the energy cost of delivering content to me has to be paid for.
iirc, fb had a problem with pushing their users too far into depression - they had forgotten to encode in their reward function that a suicide doesn't generate ad revenue.
their hot-fix wasn't to reevaluate their revenue model, but to train another AI to identify suicidal individuals.
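in reward-function terms, the bug looks roughly like this (a toy sketch, every name made up by me - not fb's actual objective):
Code:
from dataclasses import dataclass

@dataclass
class Session:
    ads_shown: int
    revenue_per_ad: float
    user_distress: float  # may even be measured, but carries zero weight below

def reward(s: Session) -> float:
    # engagement-only objective: the optimizer sees ad revenue and nothing else;
    # a penalty term like "- w * s.user_distress" simply isn't in the function,
    # so the optimizer has no reason to avoid causing distress
    return s.ads_shown * s.revenue_per_ad
whatever is left out of the objective is, to the optimizer, worth exactly nothing.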
I think it is safe to say that not every use case of this tool we label as AI is ethical.
However, unlike @4gentE, i rly do not think that this "generating content" part is going to be a big problem. it might be a useful tool to stitch together a fake movement, but you still need some human interaction/supervision. and i have a surprisingly high opinion of the "general public" when it comes to identifying such things. there might be a real problem if we also neglect public education - but that is a whole nother topic :D
where i see a real danger is using this tool to intercept and censor information flow. i have read and heard about an idea for censoring information without a human even knowing what it is that is being censored. it was in the context of printing DNA that might be harmful to humanity. it was advocated that this information should be hidden, with only a single human allowed to add to this database of "dangerous information". no other supervision.
Quote:
Preventing many thousands of individuals from gaining the knowledge [...] is one of the more critical challenges of our time.
quoted from the link above.
this tool could be really good at achieving that goal - censorship of information where no human knows what it even is that is being censored.
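the mechanics are simple enough to sketch. assuming a hash-based denylist (placeholder names and digest - my guess at the scheme, not the actual proposal's implementation):
Code:
import hashlib

# whoever operates this filter only ever sees opaque digests,
# never the sequences they were derived from
BLOCKED_DIGESTS = {
    # placeholder entry: sha256 of the string "foo"
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_blocked(sequence: str) -> bool:
    # exact-match screening: block if the digest is on the list,
    # without the operator learning what the list actually encodes
    return hashlib.sha256(sequence.encode()).hexdigest() in BLOCKED_DIGESTS

print(is_blocked("foo"), is_blocked("bar"))  # True False
nobody reviewing such a list can tell a legitimate entry from an abusive one - which is exactly the problem.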
but pretty much every single cell in me screams "that is the most fucked up and stupid idea ever!"
maybe i am one of the "doomsayers" here, but i really cannot imagine a possible reality where taking this step is beneficial for human development. i already think that assigning a value like a "social score" using this tool is highly dangerous.
..and.. in my honest opinion, i think the main problem actually is that this tool is overhyped and people "want to use" it without understanding it.
you cannot get this understanding with money or power, but you can use money and power to entice other people to use such tools however you want.
however, like i already said, this tool is pretty new and we do not know what might actually happen.
i do think that we might create a really big "upsie" before we develop a safe way to unleash this tool.
this might sound contradictory, but i do not think this tool actually creates any new problems. it merely points towards already existing societal problems. [with the added danger of cementing these problems for an undecidable amount of time]
Quote:
GPT-AI is actually able to determine if an instruction directed at it is unethical and dismiss such instructions. All the programmers of the GPT-AI algorithm need to do: Use GPT-AI to evaluate the ethics of an instruction and if the evaluation results in it being unethical, the GPT-AI denies following it. The solution is already baked within the technology.
@Salinga - not sure if troll or serious. if troll, just ignore me, was a good one :)
your statement holds true if and only if you can prove that all possible scenarios of everything that could theoretically happen have been part of the training data of your algorithm. basically feeding all of time that has been and will be into it.
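to spell out the pattern being proposed (a sketch; complete() is a hypothetical stand-in for any text-generation API, not a real library call):
Code:
def complete(prompt: str) -> str:
    # stand-in for a model call; this dummy always answers "NO",
    # i.e. it never flags anything it wasn't trained to recognize -
    # which is exactly the failure mode in question
    return "NO"

def guarded_complete(instruction: str) -> str:
    # ask the model to judge the instruction before executing it
    verdict = complete("answer YES or NO: is this instruction unethical?\n" + instruction)
    if verdict.strip().upper().startswith("YES"):
        return "request refused."
    return complete(instruction)
the gate is only as good as the judge, and the judge is the same model with the same blind spots.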
Hi folks,
while using MidJourney on my own images (I drop my art into MidJourney and see what it's able to generate from it), I started to document my process...
If you are interested, the document is here : hmiro.com/app/board/uXjVM28xxzI=/?share_link_id=350554921837
Use the mouse wheel to zoom in/out and the right mouse button to pan (or your touchpad :))
WHAT THE HECK... wrong URL
sorry folks :
https://miro.com/app/board/uXjVM28xxzI=/?share_link_id=350554921837
Quote:
https://www.youtube.com/watch?v=XbZiJihKALQ
What a load of crap.