It's good to see some pushback happening on AI; there are so many dangers. It is built on stolen intellectual property, so where is our government putting on the brakes? Ha, ha. AI's very information set is likely skewed white, Western, and wealthy, and not to be trusted. Real truth requires digging. Trusting a bot is a step way too far. It's also another form of job loss, obviously, and its energy and water use is anathema in an overheated world that doesn't understand we have exceeded the limits of growth. I wrote about energy use and other issues here for those interested. It's boggling. https://geoffreydeihl.substack.com/p/artificial-intelligence-is-hot
It'll be great to see you in PauseAI. I think there's a lot to be said about the energy use of training systems.
The potential energy use and need for additional infrastructure are ghastly. I mainly write about the climate crisis, but AI is a subject I may return to at some point. Unfortunately, there are no protests taking place near me.
This is thoughtfully written and articulated. I love it. Thank you for sharing with the world!
The AI hype, craze, and Kool-Aid is so real!
I've been pondering this thought and question for the last 12+ months. It leaves my friends and peers, and even myself, still pondering...
"In society, we have a code of conduct and ethics that guide us. We know that lying, stealing, cheating, and killing are wrong regardless of whether we are religious or not. This moral compass guides our interactions and keeps things checked and balanced. But what is keeping the creation and evolution of AI and technology in check?"
Thanks Sean for your heartfelt message!
Having already been through one AI wave that came and went, I feel positive that this one will pass too. Will there be people who suffer because of this wave? Yes, unfortunately there will be.
After this wave passes, there will be another one, and another one. I think it is human nature to seek to make a higher being.
I am not advocating doing nothing, and seeing this wave through more mature eyes than the ones I had in my 20s, the non-facts are clearer to me.
One thing that has crossed my mind many times these last few months is that I am not sure everyone doing jobs that may be temporarily replaced by today's "AI" would want to keep doing them.
I actually want to ask each and every person who has been laid off because of "optimization," or because "this is what everyone is doing," whether they want to continue doing what they were doing just before. Would they want to do anything else?
There are financial considerations in my professional decisions, and I am trying to reach a place where I am executing on my passions for a living.
I am starting to put together a solution for people to pursue their passions AND make a living. It is being done one project at a time, with one or more people at a time.
I encourage everyone to try to pursue their passions AND support others in doing the same.
Very thought provoking. Thank you.
I am pro-AI, but like everything, it requires common sense and foresight. Every day I see people posting benevolent innovations that represent forward progress. But every day I also see stupid things that people created “because they can,” not because they SHOULD.
And you know this isn't a pro-AI vs anti-AI debate. The technology is stunning and holds tremendous potential. It's just being hijacked by misguided business models that have not been thought through very well.
I love the idea of an AI slowdown of some sort, but I fear that we're simply stuck in a prisoner's dilemma, a powerful flywheel reinforced by money ad nauseam.
This is certainly a challenge, but we should note that we have agency as people if we take it grassroots and act in unison. We do have a huge portion of the public in support, with almost 45% being very worried. Even a few thousand people protesting should get our representatives to at least comment and talk to us.
This is why we have PauseAI - join us, and see what we can do together.
Yeah, it has to be pretty organic and grow over time. Thanks for the food for thought, Shon!
We might not have a lot of time, which is why we need to act. As you said, it's moving very fast, and so we are getting the word out.
This text is both beautiful and insightful.
I think everyone here knows what AI is (except me). In the article and in the comments you say this AI will kill human creativity, but how? Because it can create better? Because it can create faster?
As far as I understand, AI is data, maybe all the digital data from all the Internet sources. Where is it, under that mountain in Utah? It must also include all the Google, Apple, Microsoft, Meta, and Amazon profiling that has been collected on every one of us for the last 25 years. Therefore it is all a repeat of yesterday. Then AI is speed. It can run through all that data fast enough that you won't get bored waiting.
Then it gathers together something like a trend or a tendency related to what you asked for. Everything on the Internet doesn't point in one direction, so it has to throw out all of what it is programmed to reject. That's anything counter to the current narrative, and then out comes a politically correct answer.
Yann LeCun runs AI for Meta. He said that there will never be an unbiased AI or LLM. The theory is that there will be hundreds of AI providers (large language models), so just go and check with another platform for another version of reality. His vision is that everyone will have their personal AI assistant, kind of like a hyper Siri or Alexa. You will remember very little about life, because you will ask the assistant for every little thing, and it will do it better than you could possibly come close to. What is my shopping list for today? Oh, I'm too tired to go, call for delivery. What's the charge on my electric car? Oh, I plugged that in last night, Sir, it is 100%. Did you feed the cat? Of course, Sir, she ate 94 grams and drank 14 milliliters of water. Her heartbeat and blood pressure are normal. She will rest for an hour, then I will pet her for you, 12 strokes.
The real boon of AI is that all of the surveillance of your person will not be necessary. That's because you will be trained to enter every minute movement or thought of your life into your data file yourself. You're already doing it through your emails and your smartphone, so no big deal. Just a little more efficient.
All of this simply seems to reinforce that humans will continue to become more invalidated, which is, all in all, a bad outcome, to put it lightly. And yes, eliminating the value of human creation effectively destroys a lot of the incentive to create, along with destroying the ability to connect by no longer being able to validate whether the person you are speaking to online is a person.
Yann has been consistently wrong but does not want to admit to anything, unfortunately.
A prediction that was wrong at least 8 months before he made the statement:
https://qbnets.wordpress.com/2023/12/01/yann-lecun-proven-wrong-about-text-but-not-honest-enough-to-admit-it/
Yann being confidently wrong about his own topic of expertise:
https://x.com/JeffDean/status/1786992222036209725
Yann claiming 4 year olds are smarter than LLMs:
https://x.com/tomosman/status/1750461836078764367
Which completely falls apart when you actually look at the useful knowledge an LLM has. If you showed GPT-4o a picture of a town in Italy, it would be able to tell you the name of the town, its history, its food specialties, its most touristic spots, and the plants and animals in the area. A 4-year-old would not.
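For anyone who wants to try that experiment themselves, the query looks roughly like this. This is only a sketch, assuming the official openai Python client and an API key in the environment; the image URL is a placeholder you would replace with a real photo.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder URL: substitute a real photo of a town.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What town is this? Tell me its history, food "
                     "specialties, main sights, and local plants and animals."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/town-photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```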
I have no faith in Yann LeCun as a teller of truth. His position at Meta means that he is the chief for spreading their rhetoric. So without checking the source (I have little interest), I would say that his "wrongness" is what Meta wants us to believe.
In an interview he was taken off guard by the question: what is the business model, how does it pay for itself? That is when he coughed and flinched and mumbled something about the advertising model. Total bull; Big Tech only makes money on surveillance.
When referring to a four-year-old, he is referring to "Kinetic Intelligence": balance and orientation in the physical world without any language or words. He claims an LLM doesn't have that (and, he says, never will). Never is a long time.
This feels like the Luddites who smashed textile machinery because they feared jobs being taken away from human labor. AI is a tool. Nothing more. If an AI breaks the law, the owner of the AI should be held accountable. Humans have a bad habit of blaming "things" vs. blaming humans who created faulty things or humans who use things to harm others.
It's way too easy to blame "things" rather than face the difficult challenges of how to deal with faulty human decisions and atypical mental states. Sadly, in the USA we seem to have punted on humans with mental issues and have decided that the streets are where we want people with mental challenges to live. I'm very doubtful that trying to survive on the street is a meaningful solution, let alone beneficial to those whose mental state poses challenges for their survival in modern society.
I'm also doubtful that pausing AI will result in any meaningful benefit. The likely result is that authoritarian governments will use the pause to advance their own AI efforts and leave the "free"/democratic world defenseless to new AI technology.
AI is not just a tool; it is essentially some vision of an alternate digital species, and thus merits significantly more concern. But even from the purely humanistic angle, it is being introduced into our world by the same people who seem pretty heedless of the harm they are causing, so it does not seem like this will improve as they further disempower us.
As for other nations, this is why coordination is a strategy. As with nuclear weapons, despite major ideological differences, most of humanity can agree that we don't want to die and we don't want to be disempowered.
Today's "AI" is based on statistical analysis: it uses statistical feedback to predict the next likely step in a problem. Today's systems are super limited and require massive amounts of compute power to train to solve particular problems. The output of these models feels like reasoning to some. But is it?
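To make "predict the next likely step" concrete, here is a minimal toy sketch in Python: a bigram counter that picks the next word in proportion to how often it followed the previous one. The corpus is invented for illustration, and a real LLM is vastly more elaborate, but the sampling idea is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "statistical next step" model: count which word follows which
# in a tiny invented corpus, then sample the next word from those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows[prev]
    if not counts:                      # dead end: no observed continuation
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # sample by frequency

word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

Whether chaining likely continuations at massive scale amounts to reasoning is exactly the open question raised above.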
Assuming this strategy is the right one to achieve AGI, we are likely decades away from having the compute power to do so. But it's also very possible that today's AI strategies are barking up the wrong tree with regard to solving for AGI, and new ways of approaching the problem will need to be identified.
In the meantime, today's AI definitely has uses. Uses that include national defense. I'd prefer to see my own nation stay ahead of the curve to the extent that it helps defend against other nations who might use AI to do us harm. It could be as simple as defending against an enemy's autonomous drone swarms or something that goes after key infrastructure via software means. I see those types of scenarios as bigger threats in the short to mid-term timeframe.
It isn't accurate to say that they only use a "next likely step"; in fact, they appear to utilize conceptual spaces to develop something akin to "truth." Whatever it is, it works very well.
https://www.anthropic.com/research/mapping-mind-language-model
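As a rough illustration of what a "conceptual space" means here: these models represent words and ideas as vectors, and related concepts end up close together, which is the kind of structure the Anthropic work above probes. This sketch uses invented 3-dimensional vectors; real models learn representations with thousands of dimensions.

```python
import math

# Hypothetical, hand-made "concept" vectors, purely for illustration.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: ~1.0 for aligned directions, ~0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["king"], vectors["queen"]))  # high: nearby concepts
print(cosine(vectors["king"], vectors["apple"]))  # low: distant concepts
```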
I don't know if they are AGI or not (a concept that is itself questionable), but "rushing to stay ahead" is likely to end with a race to the bottom where we all lose. Again, the nuclear weapons analogy applies here, but this is also true of a variety of other planet-destroying capabilities. As Emmett Shear (interim CEO of OpenAI) said, saying that you want "AI to stay ahead" is a lot like saying:
"I need to build the anti-matter bomb to kill the planet first, because then someone else might kill the planet first."
The correct strategy for the planet and the humans on it is to prevent the anti-matter bomb from being used to kill the planet.
Additionally, if you want AI for national defense, then releasing it openly, so that your adversaries can simply take your weights and use them to rapidly accelerate their own progress, is a losing strategy. Instead, you should go with the Sam Hammond strategy and treat it as a national priority, with attendant security concerns, rather than letting companies run wild with it.
I've been repeating this for some time, but ultimately, it really helps to realize that AI is not like other technologies in general, and is much more akin to "summoning the demon" as Elon said.
Adjust accordingly, for our sake and the sake of your children. The easy and happy idea is "It is just a spicy autocomplete on steroids that cannot reason!" I really wish that were true, but unfortunately it isn't. Here's another example, debunking a previously held idea:
https://billdembski.com/artificial-intelligence/inference-best-explanation-chatgpt-vs-bard/
"In fact, I thought it forever beyond the reach of artificial intelligence. I commended Erik Larson’s book The Myth of Artificial Intelligence for making what in 2021 seemed like an ironclad case that artificial intelligence research was stymied in trying to model and implement inference to the best explanation, or what has also been called abduction (in contrast to deduction and induction).
In the last two days, however, all that has changed for me."
No one can question or control the limitless innovation of technology.
Even if it actually harms a majority of human beings. There are a thousand examples of that in the recent past.