Folks, I encourage you to not work for @OpenAI for free:
Don't do their testing
Don't do their PR
Don't provide them training data
https://dair-community.social/@emilymbender/110029104362666915
@skepteis Just yesterday, I signed up for ChatGPT Plus for $20.
I have no need for it; I just wanted to experience the new capabilities of GPT-4 first hand, and I think it's well worth the price of a pizza.
Free? Some are paying to do it...
@f4grx @emilymbender Don't they honor robots.txt? I think they should...
But why would you mind? I'm writing this toot knowing that it will be indexed by search engines, read by humans, and perhaps end up in the training corpus of future AI models.
That's what *public* content means to me, and I think it's awesome that the Internet makes it easily accessible to everyone, for any purpose.
@emilymbender The actual problem is that we have no choice. Almost anything we now share on the internet gets sucked into AI training. Forums, social media, news articles, everything.
@darkphotonstudio @f4grx @emilymbender People were saying similar things about the World Wide Web and YouTube. Emergent technology is often flawed...
In time, most of the problems with AI chatbots will be solved or mitigated. It's hard to predict how long that will take and what it will look like, but the trend is clear if you've been testing ChatGPT for some time.
@f4grx @codewiz @emilymbender "It's data pollution." <--- This is the number one issue I have with AI right now. It will be used to generate reams of useless text, clogging up the tubes, like digital cholesterol. Actually worse, this is #internetkesslersyndrome: just random trash whizzing around, waiting for something else to bump into it, sending more rubbish scattering in all directions, making the internet useless.
@codewiz @emilymbender AI is bad. Period. It is unethical, promoted by big businesses, does not care about anything, tells lies, generates content intended to fool humans, while stealing and plagiarizing original artwork from artists. There is basically nothing good in this technology. I DON'T want to contribute to this.
Search engines have a useful goal of sorting *information* to make it available to others.
AI just generates empty bullshit. It's data pollution.
@f4grx Re-reading this thread, your main objections to AI were:
1. "It is unethical, promoted by big businesses, does not care about anything, tells lies, generates content intended to fool humans, while stealing and plagiarizing original artwork from artists."
2. "I believe computers lack what can make human intelligence possible, in a very fundamental way"
Let me try to address each one separately, starting from the second, which is more fundamental.
@codewiz @darkphotonstudio @emilymbender the problem with chatbots is fundamental and cannot be solved in any way.
For (2), I hold the opposite belief, and here's why:
* Natural Intelligence (NI) exists today without violating the laws of physics, therefore it should be possible to re-create it using raw materials available on Earth.
* NI evolved gradually, starting from simple neural ganglia to move an organism towards food and away from danger. Therefore, there's at least a path to go from amoeba to human intelligence (and maybe super-human too).
(continuing)
* NI is based on cellular biology, but to the best of our understanding, that's an implementation detail. A neural network with the exact structure and behavior could be implemented using different biology, any transistor technology, or even gears and pulleys.
* Today's computers have billions of transistors, but they're organized very differently from a brain. I agree with you that a multicore CPU + RAM running C++ code is qualitatively different from NI and can't reproduce it effectively.
* However, massively parallel hardware for neural networks is becoming available. OpenAI used massive GPU clusters to train and operate ChatGPT, and it's a giant leap. They're getting bigger, faster, and cheaper every year.
* If the current trend continues, HW for AI will reach the scale of human brains (~86 billion neurons and ~100 trillion synapses). We're currently very far from that, which (in part) explains why ChatGPT has limited memory and makes so many silly mistakes.
* Next, we need to figure out training: currently it's done with huge corpora of data... with the limitations that you observed. But it's not the only possible way...
* Baby animals start with some built-in behavior and learn the rest from the environment, using sensorial inputs. Social animals can also transfer knowledge from adults to babies. Training NI is a slow process that takes years for complex animals like humans.
* For AI, we can follow the exact same path as well as many other, faster options. Today we're mainly exploring the other options because they're easier, cheaper and more predictable.
* Supervised and reinforcement learning led to AlphaGo, which can beat any human player at Go (narrow AI) and now ChatGPT, which can speak 20 languages, but gets confused and makes mistakes.
* We're still very far from reaching the physical limits of these techniques.
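The substrate-independence point in these bullets can be illustrated with a toy network. The sketch below is purely illustrative (the weights are hand-picked, not learned, and the code is mine, not any poster's): a two-layer network of threshold units computes XOR using nothing but multiplication, addition, and comparison, operations that transistors, biology, or gears and pulleys could all provide.

```python
def neuron(inputs, weights, bias):
    """Fire (1) iff the weighted sum of inputs exceeds the bias.

    This is all a neural unit does: multiply, add, compare.
    Any physical substrate that supports those operations can host it.
    """
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > bias else 0

def xor(a, b):
    # Two-layer network computing XOR, a function no single neuron can.
    h1 = neuron([a, b], [1, 1], 0.5)       # OR-like hidden unit
    h2 = neuron([a, b], [1, 1], 1.5)       # AND-like hidden unit
    return neuron([h1, h2], [1, -1], 0.5)  # h1 AND NOT h2 -> XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The network's behavior is fixed entirely by its structure (weights and connections), not by what the "neurons" are made of, which is the sense in which the substrate is an implementation detail.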
@codewiz @darkphotonstudio @emilymbender Gears and pulleys. Okay. You have no idea what you are talking about, and you are waving away hard problems as if they were just implementation details. It doesn't work like that.
@f4grx But it *does* work this way! It's a fundamental concept of computability theory, and it was mathematically proven long ago:
https://en.wikipedia.org/wiki/Turing_machine
TL;DR: a mechanical computer will be much slower and less reliable than a modern PC, but both can execute any algorithm and compute the same result.
My point is that electronic computers are qualitatively equivalent to other technologies, even though there are practical differences: speed, cost, size...
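The equivalence argument can be made concrete with a toy simulator. This sketch is illustrative only (the machine and its rule table are invented for the example): a few lines of Python emulate a Turing machine whose entire repertoire is read, write, move, change state, which is all any computer, mechanical or electronic, fundamentally does.

```python
def run(tape, rules, state="start"):
    """Simulate a Turing machine until it reaches the 'halt' state.

    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L" and "_" is the blank symbol.
    """
    tape = list(tape)
    pos = 0
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape).rstrip("_")

# Toy machine: flip every bit, then halt on the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", flip))  # -> "01001"
```

Whether this rule table is executed by Python, by relays, or by a crank-driven contraption, the output is the same, which is the qualitative equivalence claimed above; speed and reliability are what differ.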
I don't think I can prove that AI is good, any more than I can prove that humans are good.
But perhaps I can convince you that appropriately trained AI could achieve human intelligence in any field, and be taught to behave as a "good" human.
The problem is that there's no consensus among humans on the meaning of "good" or "ethical". Currently, OpenAI researchers get to decide what's good for their chatbot, and you're free to disagree and not use it.
@codewiz @darkphotonstudio @emilymbender Just so you know, unless one of your arguments is exceptionally exceptional, I have no intention of becoming convinced that AI could be good.
@f4grx @darkphotonstudio Clearly, there's an abyss of disagreement between us. I believe the Internet and the Web are among humanity's greatest achievements, and they keep getting better every year.
I doubt we can continue this discussion if you truly believe that AI chatbots were created with the purpose of generating bullshit.
Clearly, there are millions of users who find ChatGPT interesting or useful today in spite of the current downsides and limitations.
@darkphotonstudio @codewiz @emilymbender yes, that's it.
Moreover, the WWW was not created with the purpose of generating bullshit, but to make the world's knowledge available. Quite a difference.
@codewiz @f4grx @emilymbender Just because a technology exists, doesn't mean we should use it. Also, the WWW is becoming increasingly useless and full of ads and bots and SEO websites full of useless bullshit. This only compounds the problem. AI text generation won't solve any issues that outweigh the amount of problems it will create.
@codewiz @darkphotonstudio @emilymbender AI can only be a simulation of NI. If you need all the computers on the planet to emulate the intelligence of only ONE individual, then there is clearly an efficiency problem, and that is a sign that something fundamental is missing. Also, kids don't need to digest every stock image on the internet to tell you that a garden they have never seen before is cute.
@f4grx @darkphotonstudio @emilymbender I see these as technological problems that can be solved in time.
The underlying assumption here is that brains are an intricate machine made of simple chemical elements arranged in ways permitted by biochemistry. No magic, no soul.
So, if a human brain can operate on roughly 20 watts, it means that there's plenty of room for optimization in today's AI accelerators!
Will we get there one day? How long will it take? I don't know, but I know it's *possible*.
You seem to be thinking AI = Capitalism = Pure Evil.
Can't you also imagine AI research as a scientific process, which can lead to discoveries and applications?
Insofar as these discoveries are public, they will be available to every business, non-profit, government, and individual on the planet.
For instance, this is just one of many open-source GPT projects:
https://github.com/tatsu-lab/stanford_alpaca
@f4grx @codewiz @emilymbender If capitalists can exploit anything, of course it will be worse.
@darkphotonstudio @codewiz @emilymbender I am worried that any future general AI would be even worse than ChatGPT.
@codewiz @darkphotonstudio @emilymbender Think about how much compute you need to emulate a SINGLE biological cell. Emulation of NI does not work. Either we find the missing part, or we're going to need every CPU we have to reach half-disappointing results.
That "implementation detail" is a big one.
@f4grx @codewiz @emilymbender It doesn't really matter whether strong AI is possible, in any case, because no one has done it and no one knows how. It's irrelevant to what we have now, this fucking ChatGPT rubbish.
Then I don't understand what you're proposing: stop all AI research now and make it illegal to train and operate AI models?
Sounds like an extremist Luddite position to me, but perhaps that's not what you're saying...
@codewiz @darkphotonstudio @emilymbender Look at this. Is that good? No. What a shitty future because of AI.
@darkphotonstudio We don't get to choose in which world we were born, but we can individually and collectively influence it.
If *you* care about science and knowledge over profits, then invest your time, your money and your talent in support of open-source AI projects and open AI research.
Here's one that's looking for help, and you don't need to be an AI expert to contribute:
https://open-assistant.io/
@f4grx @codewiz @emilymbender if people cared about science and knowledge over profits, I’m sure AI research could be beneficial. But we don’t live in that world.
@codewiz @darkphotonstudio @emilymbender
>You seem to be thinking AI = Capitalism = Pure Evil.
Yes. It steals data from everywhere to increase corporate profit.
>Can't you also imagine AI research as a scientific process, which can lead to discoveries and applications?
No.
Some deep learning applications have value, like cancer diagnostics. That is a very narrow scope and requires human validation.
Unregulated AI has no value as a general tool.
@f4grx Not very constructive, eh? All technologies have potential hazards: electricity kills people, cars kill people *and* emit toxic gases, cameras can be used for surveillance, X-rays cause cancer...
As with any technology, a reasonable amount of regulation will make AI safe enough for everyday use. But it's still too early to say how it should be regulated.
@codewiz @darkphotonstudio @emilymbender My contribution to science and knowledge is to actively refuse to develop any AI related technology.
Just because it's doable does not mean it should be done.
I care about making people aware of walled gardens and abusive tech.