AI is forcing the hand of the Demoscene.
category: general [glöplog]
Right, right. In relation to the Oppenheimer story, you guys are more like those scientists afraid of atmospheric ignition. Theoretically possible, but very unlikely.
That makes you Neville Chamberlain.
Speaking of taking things to a totally irrelevant sidetrack based on an irrelevant ornamental detail in the original post, a detail that was only colloquially mentioned by the original poster as a mere symbolic tool to make some other point.
Who is this “Elon Musk” person you refer to? Wait, is it that bizarre serial birthcanal ejaculator?
Quote:
those scientists afraid of atmospheric ignition
Aha, you mean like those people who suggested there would be war when Putin amassed 100,000 soldiers on the border. Lucky we had the cool types who laughed the prospect of war away.
Again, this (human to human) conversation we are having will be near impossible in the near future thanks to LLMs, that’s the main problem. LLMs will not kill like a nuclear weapon; they will kill like diabetes, or climate change. I guess, after decades of hollywoodization, it’s no wonder some people fail to see beyond fireworks.
I forgot everything is so politicized nowadays.
So, what is the solution in your view? Should everyone just stop working on AI, since there is a non-zero probability that something really bad can come out of it?
There is also a non-zero probability that a giant meteorite will hit the earth. Which ironically is the theme of some of Conspiracy's 64k intros, isn't it?
I would of course feel bad if I were the one downplaying the risk and, to my surprise, the meteorite actually came the next day.
Damn those people, those lowly luddites that tried to exercise caution, those people that tried to stop nuclear weapons from being used, right?
I don’t see how standing up to tech bro venture capitalist nonsense is a sign of being “politicized”? Well, come to think of it, maybe it is; maybe doomsaying is exactly what is being used to create more fund attracting hype.
Which would then make me a useful idiot, not somebody else.
Well, as you suggested, it would be best if I just shut up and go along with the program.
LLMs as of today are based on transformers.
There are numerous papers showing severe limitations of transformers.
For example https://arxiv.org/abs/1906.06755 demonstrates that they "cannot model periodic finite-state languages, nor hierarchical structure, unless the number of layers or heads increases with input length".
This result gives you a clue that it's not the tech that is so advanced; rather, the language we use is quite primitive, so even a simple, weak model like this can fool us into thinking it's at all intelligent.
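As a toy illustration of the flavor of that limitation (my own sketch, not the paper's actual argument): a single attention head with uniform weights just averages its inputs, so any two strings with the same multiset of tokens collapse to the same representation, and order-dependent structure is invisible to it.

```python
import numpy as np

# Toy embeddings for tokens 'a' and 'b' (arbitrary values, for illustration only)
emb = {'a': np.array([1.0, 0.0]), 'b': np.array([0.0, 1.0])}

def uniform_attention_pool(s):
    # One attention head with uniform weights is just a mean over token
    # embeddings -- it ignores token order entirely.
    return np.mean([emb[c] for c in s], axis=0)

# 'abab' and 'aabb' have the same token counts but different structure;
# uniform pooling cannot tell them apart.
print(np.allclose(uniform_attention_pool("abab"),
                  uniform_attention_pool("aabb")))  # True
```

Real transformers of course learn non-uniform attention, so this is only a caricature of the fixed-depth counting problem the paper formalizes.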
Quote:
maybe doomsaying is exactly what is being used to create more fund attracting hype.
That's exactly the case. So you are following the program already.
Therefore the debate is leaning to the philosophical rather than the practical side. Will AI in its future incarnation be of service to all the people, or will it be exclusive to some 'global elites club' who will live beyond any reach in their floating cities while the rest of the world is left to a state of hunger games?
Of course plain logic and history (and movies) would point to the second; on the other hand, humanity has reached whatever it is we are sitting at only because of technology. And the ultimate goal of human beings #should# be that prosperity is achieved through technology; otherwise, having one group of people do the hard work so another group may benefit, which has pretty much been the story of humanity up to this point, will never change.
Utopian and Dystopian ideas come very close, it's easy to blur and get full of "yeah, but"s. Is Communism utopian or dystopian?
Quote:
people doing the hard work so other group of people may benefit
rexbeng: I completely agree with you here. It is and it will be like this, but it's unrelated to AI.
The real problem is collecting the data and who owns the data: code, art, your family photos, your music samples etc...
AI is just a red herring and even more so, a very convenient excuse.
It's an excuse, because if you convince general public that AI is creative or self-aware, you can claim copyrights to derivatives. Simple as that.
And even more, you can claim "AI" will help humanity, so even more people will give you their data in good faith.
But in reality, "AI" so far is just weak predictive models that anyone could create on their laptop; without data they are useless.
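For a sense of scale, here is roughly what a "weak predictive model anyone could build on a laptop" looks like at its very crudest: a character-level bigram counter (my own deliberately primitive sketch, nothing like a real LLM, but the same predict-the-next-token idea).

```python
from collections import Counter, defaultdict

def train_bigram(text):
    # Count how often each character follows each other character.
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    # Predict the most frequent successor seen in training, if any.
    return counts[ch].most_common(1)[0][0] if counts[ch] else None

model = train_bigram("the theme of the thread")
print(predict_next(model, "t"))  # 'h' -- every 't' in the training text is followed by 'h'
```

Scaled up with billions of parameters and a web-sized corpus, the same next-token objective produces today's LLMs, which is the point: the leverage is in the data.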
About doomsaying: Elon Musk was the main investor in OpenAI and was among the first to spread doomsday scenarios. Now of course, the group of doomsayers is much larger and it became a fashionable thing to do - perfect for all attention whores out there. So they just carry on and yes, drive the hype, create more funds around AI, and are very conveniently distracting everyone from the actual problem.
Yeah, we’re way off topic. But I’ll just say this.
@tomkh - I see you haven’t noticed that I’m not saying that LLMs are ubersuperior soon-to-become sentient AI tech and that makes them dangerous. I’m saying exactly the opposite all the time - they are bullshit overhyped tech. Which makes them all the more dangerous. They are resource hungry and inefficient, and tech bros will totally unnecessarily shove them into everything, mainly for tasks easily achievable with way less complex and way greener tech. But they can and will be used for superproducing and superspreading misinformation. They don’t have to be ‘great’, they only need to be ‘good enough’. Enough people believe in flat earth already as it is, learning about it from semi-literate texts which will now become way more literate.
But this is the problem I described earlier. Every 50 posts one has to repeat himself. Not very enjoyable anymore.
Oh I have another one. Damn those luddite doomsaying Cheyenne medicine men that tried to stop progress by suggesting whiskey was a bad idea.
Wait, wait, I have another one. Damn that Morpheus and his red pill. ;-)
@rexbeng: I cannot even begin to pretend to know what you mean by “communism”. The idea?
Quote:
perfect for all attention whores out there
4gentE: not sure if you noticed this.
Quote:
Not very enjoyable anymore.
Oh but you do seem to enjoy ;)
@tomkh, yes, that's why I wrote about a hypothetical future incarnation of AI.
The interesting part, for me at least, is that shift towards 'digitalization' that had been breezily happening without really bothering anyone, until this AI was brought to the game and flipped the tables over. I mean, we are talking about data and NFTs and digitally created (however easily or laboriously) and spread content and all that jazz; assets that are not connected to the physical world, unless you define them by 'time' and 'energy', although those are themselves arbitrary... So, things you cannot actually own, hold and have; rather you just 'have rights' to them. And you may as well be disconnected if physical technology (cables, Internets, electricities...) fails for whatever reason, and there's nothing you can do about that (and I'm not even thinking about the very digital conditions that may change so fast they are way beyond a man's ability to react to them; say, NFT collapses).
And yet this intangible nothingness creates so much controversy.
Yes, the talk has gone offtopic, but I gather there's not much else to say in the context of the demoscene; other than give ideas to make demos about it. :)
@4gentE: damn that Prometheus dude, I say!
@4gentE: I am referring to the social/economic basis.
Quote:
(on communism) I am referring to the social/economic basis.
I don’t see anything controversial about it.
Now, I’ll try to push the conversation (at least remotely) back on topic.
Of course the results of messing with a complex socioeconomic system are unpredictable.
The idea of communism is to create a better, more efficient, more humane distribution of wealth/goods/means of production. The “disruptiveness” that ensues when trying to put it into practice is not the goal, it’s collateral.
The idea behind LLMs is to gather a lot of investors money by hyping the already present “disruptiveness” of this tech. The potential gain to the general population is not the goal, it’s collateral.
See the difference?
Of course, but when one brings communism to a conversation, one is doomed to get a lot of "cannot function in the real world / see how it turned out", so there's the controversial part.
It's hard not to see the difference in what you describe, as it's pure capitalism trying to monetize. And pure capitalism is our reality, so 'yet another mass/trash/cash venture' is no surprise. And I realize, as with other ventures, there's not much I can do about AI, other than be curious to discover if/how to use it in my field of interest. I don't know if AI is going to grow and then suddenly blow over to a 'nobody cares' state like other big-money-making tech like 'smart', 'virtual', 'augmented'. But it's one of those rare situations where a technology sparks my curiosity about the theoretical question of how it might take things forward or not.
Based on my personal knowledge in my field of interest, my opinion is that at its current state AI is a tool that may be useful. With regards to the more general (or global) question "should AI be stopped and be done with right now", if it were within my power I honestly wouldn't know which button to push.
But I'm a utopianist, I think.
Consider this (it’s my construct, but indulge me please).
The youth that gathered in Seattle in 1999 and Genoa in 2001 to protest were the children of web1.0. The youth of today gathered around alt-right vs. woke “culture wars” are children of web2.0 + algorithmic targeting. Imagine the youth of tomorrow as children of a near-future web propelled by LLMs. It’s a race to the bottom. No, LLMs can’t be stopped, but I’m sure as hell not going to celebrate them, nor am I going to pretend this tech has any chance of being neutral, deployed at this particular point in history and in this manner.
Please don’t get insulted, it’s not my intention to insult, but “it sparks my curiosity”, when faced with, if nothing else, further damage to the climate and further damage to other people’s livelihoods, is a pretty petty, ignorant and selfish stance IMHO.
LLMs are the least of the things to worry about. Shit has happened many times before, and the mechanism here is almost the same. Do not blame the internet. LLMs can actually be good.
This is my own quote from Page 4 of this thread. Rexbeng, I’m sure you remember. This explains why you won’t get me applauding this thing, even if I consider only the “good” part and ignore everything else. Honestly, when I consider it all, the only field where I think (or thought) LLMs could be of great help, and the good will outweigh the bad, is medical diagnostics. General medical diagnostics are in a poor state in many countries. With LLMs we could potentially get a shitload of Dr. Houses along with his whole team, in each and every primary care unit. In fact, I was pretty sure of this until I read this: https://www.fastcompany.com/90932968/martin-shkreli-dr-gupta-sasha-luccioni
So here’s the quote:
Quote:
Let me just share one more (almost completely subject-unrelated) story that relates to the continuity of technological ‘progress’ which you mention, in a field I know very well. There’s no speculation here, no projection, no interpolation; it’s all been seen and lived thru. It goes something like this:
Local graphic design achieved its heyday (quality wise) in the 80s. That was before computers. Then (in simplified version) computers came. In design, there’s this Big Monster that is called The Client. Different Big Monsters come with different levels of ability and willingness to sabotage good design work. But generally speaking, the best work is done when the monster is kept well at bay. As a rule.

When computers came, the quality of graphic design plummeted. Wanna know why? Because everybody was doing it. Because all of a sudden the Big Monsters’ ‘little talented nephews’ had computers and would readily churn out designs that adhered to the Big Monster’s poor taste. Also, some bad designers with poor ethics joined in, let their hands be guided by the Big Monster, and took over the good designers’ gigs. The result was worthy of crying over. The profession regressed 30 years in 5-6 years’ time. Something similar happens with journalism today, I think.

Anyway. With time this situation got better. Universities finally reacted and modernised their curriculums. So that now, 30 years after this ‘great de-evolution’ of graphic design, we came back to the level (quality wise) at which we were in the 80s. Every designer uses a computer and a bunch of software these days. A lot of resources, a lot of money and time spent to achieve the same result. Maybe even still a little bit inferior compared to the heyday.

So what I ask you all is this: do we need to go thru all that ordeal again? Do we absolutely have to watch new barbarians tear down everything and wait for them to become educated? So that in 30 years’ time we can climb back to where we are now? Can’t we just move on without ‘moving fast and breaking stuff’?
@4gentE: No worries, no offense taken. I understand how it's easy to give/get ideas when engaging in depersonalized conversations with people you don't know over the internet. But please consider that, in some different topic, where your feelings and sensitivities towards the subject are different, you could quite probably end up being the one called 'petty, ignorant and selfish'.
The hard truth is that we do damage to the climate with pretty much everything we do. I am doing further damage by using a computer to connect to the internet and write this post. The world is what it is and I stand by what I said. AI as I currently understand it, even with LLMs being nothing more than what has been described in this thread, could be applied to various fields with the potential to run tasks that are currently time consuming and laborious and could benefit from the speed and the capacity. Among many things, think of the justice system, science research, various areas of planning/building/engineering, logistics, proofreading, tracing bugs in code. There are potential benefits in psychology, which is a very interesting field for reasons you may think of. And the goal would be not to replace experts in the fields, but to assist.
What you are describing in your self-quote is a problem of people in the western/liberal/capitalistic system, not the technology. The technology just functions or not. Hollywood writers are fighting to get AI regulated, not abolished.
Finally, please indulge me as well when I'm making constructs of my own: Say we are back to where I had the power to shut things down by the pushing of buttons. I would be very happy to push a lot of buttons for things that IMHO damage the climate way too much without even being at all useful or constructive, and that also turn people into ignorant idiots. Doing so would cost the many people who are in those things' fields their jobs, many people their leisure, maybe even their 'way to express themselves' or 'fun' or 'meaning of life', to take things to the extreme. I would keep AI.
@rexbeng/PA :
I’m very glad you didn’t take offense. I always fear that when a conversation is conducted thru writing only.
Quote:
The hard truth is that we do damage to climate with pretty much everything we do. I am doing further damage by using a computer to connect to the internet and write this post.
I wonder why you didn’t make an example of ‘humans breathing’. Breathing also produces CO2.
1) There is, you know, a measure. Metrics. Damage to the climate is not a binary variable with only 2 states, as in 0 vs 1. I can’t understand why you constantly and deliberately play this down. Gargaj pointed out on several occasions that the power needed to train/operate an LLM is not even in the same ballpark as your example (a computer connected to the internet).
2) These things are, you know, cumulative. So, if you state the obvious fact that computers and the internet already damage the climate, it certainly does not mean that we therefore shouldn’t worry about LLMs’ climate impact. You see, not only is the LLMs’ impact huge, but it also adds to the already existing impact. Surely you can see this, but for some reason wish to ignore it. So this “argument” is pure whataboutism.
“Many people do not understand just how much energy AI consumes. According to TechTarget, the total consumption of one model over nine days was no less than 27,648 kilowatt-hours (kWh). This staggering number is more than the amount of energy that three households use in an entire year – and it only took one program a little more than a week to consume that much energy.”
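The household comparison in that quote roughly checks out with back-of-the-envelope arithmetic (the ~9,000 kWh per household per year is my own assumed round figure, not from the quote):

```python
# Reported training consumption over nine days, in kWh (from the quote above)
training_kwh = 27_648

# Assumed annual consumption of one household, in kWh (rough round figure)
household_kwh_per_year = 9_000

households_per_year = training_kwh / household_kwh_per_year
print(f"{households_per_year:.1f} household-years")  # prints "3.1 household-years"
```

So a single nine-day training run lands at roughly three household-years of electricity, consistent with the quote's claim.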
And all that energy wasted so that I can pretend I’m an artist in front of my internet buddies and slap 6-fingered abominations onto my DeviantArt page. Great! Applause everyone! What are those measly sub-Saharan lives in comparison to my need for applause, right?
My point isn't to make comparisons of energy consumption. My point is that every activity connected to our qualities and quantities of life consumes energy. And the energy consumption of those QQs is only going to grow. Should we pause technology from now on? Perhaps we should. And should we further come up with regulations to force an 'un-development' on ourselves? Perhaps we should also.
But banning tech because of 'pretenders and their internet buddies' isn't the way to go imho. Govs should impose proper regulations for fair use; and fair use should even cover and separate the justified use of tech from the unnecessary one (and we already have a lot of the latter).
Again! I will not get into doing the math, but I'd prefer a globally enforced 'go back to retro phones' policy to banning AI, for example.