Step 5: #inkscape
Now I had the base outline on which I would draw. Converting every piece into a vector was not a trivial task either.
At Rikugien gardens.
In both cases, I did not get exactly what I wanted, but it gave me a good base to work from. Time to draw!
@ben
@srevinsaju @smoldesu For a GPT, does the model size affect the number of neurons in each layer? Then it would scale somewhere between N^2 and N^3, without considering the effects of caches and the differences between layer types.
I imagine changing the number of layers is more complicated, but for inference layers should have linear cost... right?
My experience is limited to reading ML blogs and watching video lectures.
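To make the scaling question above concrete, here's a back-of-the-envelope sketch of how parameter count grows in a GPT-style transformer. The formula is my own simplification (it ignores biases, LayerNorm, and positional embeddings, and assumes the standard 4x MLP expansion), so treat the numbers as illustrative only:

```python
# Rough parameter-count sketch for a GPT-style transformer.
# Hypothetical formula: real models differ in embedding, attention
# and MLP details; biases and LayerNorm are ignored here.

def transformer_params(n_layers: int, d_model: int, vocab: int = 50257) -> int:
    """Approximate parameter count: embeddings + per-layer blocks."""
    embed = vocab * d_model            # token embedding matrix
    attn = 4 * d_model * d_model       # Q, K, V and output projections
    mlp = 2 * d_model * (4 * d_model)  # two linear layers, 4x expansion
    return embed + n_layers * (attn + mlp)

# Doubling the width (d_model) roughly quadruples per-layer cost (~N^2),
# while doubling the depth (n_layers) only doubles it (linear).
small = transformer_params(n_layers=12, d_model=768)   # ~124M, GPT-2-ish
wide  = transformer_params(n_layers=12, d_model=1536)  # wider
deep  = transformer_params(n_layers=24, d_model=768)   # deeper
```

So under these assumptions, adding layers is indeed linear in cost, while widening each layer is quadratic, which matches the intuition in the post above.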
@f4grx Not very constructive, eh? All technologies have potential hazards: electricity kills people, cars kill people *and* emit toxic gases, cameras can be used for surveillance, X-rays cause cancer...
As with any technology, a reasonable amount of regulation will make AI safe enough for everyday use. But it's still too early to say how it should be regulated.
@darkphotonstudio We don't get to choose in which world we were born, but we can individually and collectively influence it.
If *you* care about science and knowledge over profits, then invest your time, your money and your talent in support of open-source AI projects and open AI research.
Here's one that's looking for help, and you don't need to be an AI expert to contribute:
https://open-assistant.io/
Then I don't understand what you're proposing: stop all AI research now and make it illegal to train and operate AI models?
Sounds like an extremist Luddite position to me, but perhaps that's not what you're saying...
I've been playing a bit with #OpenAssistant, an open-source #AI chatbot similar in scope to #ChatGPT:
https://open-assistant.io/
While it's in early stages, it looks very promising. If you have some time, you can contribute by manually labeling / ranking prompts and responses.
This is pretty cool!
You seem to be thinking AI = Capitalism = Pure Evil.
Can't you also imagine AI research as a scientific process, which can lead to discoveries and applications?
Insofar these discoveries are public, they will be available to every business, non-profit, government and individual on the planet.
For instance, this is just one of many open-source GPT projects:
https://github.com/tatsu-lab/stanford_alpaca
@f4grx @darkphotonstudio @emilymbender I see these as technological problems that can be solved in time.
The underlying assumption here is that brains are an intricate machine made of simple chemical elements arranged in ways permitted by biochemistry. No magic, no soul.
So, if human brains can operate on 23 watts, it means that there's plenty of room for optimization in present TPU architectures!
Will we get there one day? How long will it take? I don't know, but I know it's *possible*.
@f4grx @darkphotonstudio Clearly, there's an abyss of disagreement between us. I believe the Internet and the Web are among humanity's greatest achievements, and they continue to get better every year.
I doubt we can continue this discussion if you truly believe that AI chatbots were created with the purpose of generating bullshit.
Clearly, there are millions of users who find ChatGPT interesting or useful today in spite of the current downsides and limitations.
I don't think I can prove that AI is good, any more than I can prove that humans are good.
But perhaps I can convince you that appropriately trained AI could achieve human intelligence in any field, and be taught to behave as a "good" human.
The problem is that there's no consensus among humans on the meaning of "good" or "ethical". Currently, OpenAI researchers get to decide what's good for their chatbot, and you're free to disagree and not use it.
@f4grx But it *does* work this way! It's a fundamental concept of computation theory, and it was mathematically proven long ago:
https://en.wikipedia.org/wiki/Turing_machine
TL;DR: a mechanical computer will be much slower and less reliable than a modern PC, but both can execute any algorithm and compute the same result.
My point is that electronic computers are qualitatively equivalent to other technologies, even though there are practical differences: speed, cost, size...
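To illustrate the equivalence argument, here's a toy Turing-machine interpreter. The rule table below (my own example, not from the Wikipedia article) inverts a binary string; the point is that a mechanical device following this same table, however slowly, would compute the same result as a modern PC:

```python
# Minimal Turing-machine interpreter: a toy illustration that a simple
# mechanical rule table can carry out an algorithm, regardless of the
# substrate that executes it.

def run_tm(tape, rules, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table: flip each bit and move right; halt on reaching a blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

run_tm("1011", flip_rules)  # → "0100"
```

Any machine that can read a symbol, write a symbol, and move a head can run this table: gears, relays, or transistors. That's the qualitative equivalence; speed and reliability are the practical differences.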
* For AI, we can follow the exact same path as well as many other, faster options. Today we're mainly exploring the other options because they're easier, cheaper and more predictable.
* Supervised and reinforcement learning led to AlphaGo, which can beat any human player at Go (narrow AI) and now ChatGPT, which can speak 20 languages, but gets confused and makes mistakes.
* We're still very far from reaching the physical limits of these techniques.
* Baby animals start with some built-in behavior and learn the rest from the environment, using sensory inputs. Social animals can also transfer knowledge from adults to babies. Training NI is a slow process that takes years for complex animals like humans.
* If the current trend continues, HW for AI will reach the scale of human brains (~10 billion neurons and ~20 trillion synapses). We're currently very far from that scale, which (in part) explains why ChatGPT has limited memory and makes so many silly mistakes.
* Next, we need to figure out training: currently it's done with huge corpora of data... with the limitations that you observed. But it's not the only possible way...
* Today's computers have billions of transistors, but they're organized very differently from a brain. I agree with you that a multicore CPU + RAM running C++ code is qualitatively different from NI and can't reproduce it effectively.
* However, massively parallel hardware for neural networks is becoming available. OpenAI used TPUs to train and operate ChatGPT, and it's a giant leap. They're getting bigger, faster, cheaper every year.
(continuing)
* NI is based on cellular biology, but to the best of our understanding, that's an implementation detail. A neural network with exactly the same structure and behavior could be implemented using different biology, any transistor technology, or even gears and pulleys.
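The substrate-independence point above can be made concrete: a neuron is just weighted sums and a threshold, with nothing tying it to biology. This toy network (weights chosen by hand, purely illustrative) computes logical XOR using only arithmetic that gears, relays, or transistors could perform:

```python
# A neuron reduces to a weighted sum plus a nonlinearity; the medium
# that performs the arithmetic is an implementation detail.

def step(x):
    """Threshold nonlinearity: fire (1) if the input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: an OR-like unit and an AND-like unit.
    h1 = step(a + b - 0.5)   # fires if a OR b
    h2 = step(a + b - 1.5)   # fires if a AND b
    # Output: OR but not AND, i.e. exclusive or.
    return step(h1 - h2 - 0.5)

[xor_net(a, b) for a in (0, 1) for b in (0, 1)]  # → [0, 1, 1, 0]
```

Scaling this idea from three hand-wired neurons to billions of trained ones is an engineering problem, not a conceptual one, which is the crux of the argument in this thread.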