I don't care for the internet's "expert at everything" quick (and often wrong) answer-generation stuff.
Keep up the good work!
I'm still waiting for the day when I can just feed it my entire codebase (or at least a few 1000-line files) and tell it to do some processing with that.
I hate the AI voices in a lot of videos, but they are getting alarmingly better.
possibly to the point where you can't trust anything you don't see with your own eyes.
For example, if I give it 20 pictures of my face, then say "generate a new angle of my face with a mustache".
Yes, they're wrong sometimes, but so are real professionals.
A professional can also tell you when they're not sure about the answer they're giving.
It's not perfect, but professionals can similarly be wrong.
But when do we ever have access to an actual professional?
I don't see how reliability issues won't be a thing of the past.
If you want to say that it's a glorified search engine...
It could very well be that these issues are solvable, or that instead we're seeing a fundamental limitation of the technology that can't be worked around no matter how much data or compute you throw at the problem, at least not without a complete change in paradigm.
This is fast, but it doesn't allow reasoning or introspection.
No search engine has reasoning capabilities, so I'd never even come close to calling it a glorified search engine.
As a learning aid alone, AIs are insanely good. The amount of time I've saved by learning from AI models is huge. Things that would once take me hours of good research just to find good information on, then hours and days longer to fully comprehend, can now be done in a fraction of the time.
You could conjure up a problem that's never been solved before, and the AI may be able to reason and solve it.
What they understand well, they never fail at.
It's completely reasonable to think that the way AI learns is not a "bad" method, especially considering how it's doing in a few years what took evolution billions of years to accomplish.
I would encourage you to use the smarter models if you can, as they're not like the weaker models at all.
Well, neither do LLMs, so the comparison is quite appropriate.
In the past, Google could have fulfilled the same need.
Nah. It may be able to solve it, but certainly not through reasoning.
"Understand" is an unnecessary anthropomorphism which is not applicable to LLMs.
NNs can do almost none of the things real brains do.
I'm not exaggerating when I say all they can do is regurgitate input they've seen before, perhaps with some substitutions.
If you define reasoning so strictly that what an LLM does is not reasoning, then you may end up making it so strict that even humans aren't reasoning by your standard.
In this case, it's more about the end result than how you get there.
The question is: why would an LLM not be able to reach our level of reasoning capability? Or at least a comparable level, whatever we end up calling their way of reasoning.
A lot of our creativity comes about from how mistake-prone we are, how imperfect we are. If we could draw perfectly (like an AI), we'd never have variations in art. But because we suck, we end up unlocking many more possibilities.
But we definitely can't do something that would require a "real" imagination, like envisioning a new color that doesn't exist.
I mean, I don't see how your argument would be any different from this imaginary conversation between two AIs:
AI 1: Humans are smart! Sure, some are dumb as balls, but if you talk to the really smart ones, you'll see they're very reasonable and capable intellects.
AI 2: I encourage you to talk to the dumb ones. They just do very simple pattern recognition, and almost all of their actions are instinctual; reasoning and problem solving are weak and not at the forefront of their thought process. The "smartest" humans work essentially the same.
If we judge the process instead of the outcome, then we'd have to say human brains are like cockroach brains and we're all operating on the illusion of reasoning.
Obviously we have reasoning skills because that's what we call information processing in order to reach logical conclusions.
But in reality, someone who understood the inner workings of a brain's reasoning skills may say the same for us.
It's tautological to say that humans reason, because that's what the word means.
Going purely by results, some 60 years ago you might have concluded that ELIZA could reason.
We should not go purely by what outwardly seems to happen.
LLMs don't consider the truth value of propositions, or the relationships between objects.
If anything, the improvements in technique empowered artists to create more.
Why is that what "real imagination" is? Again, why are you purposely pretending not to be human to discuss cognitive processes?
It's that our brains are all slightly different, with different tastes.
If your argument is that, by following my own argument, an AI would be right to call me dumb by its own standards, then that's fine by me.
(Also, I don't agree that dumb humans don't reason.)
That would be exactly judging the result instead of the process... They're all doing exactly what the physics of their particles determine they'll do.
We're judging results, not the internals of the process, right?
If we completely understood how the brain works, that would not undefine reasoning. In fact, it would define it perfectly.
There is an argument to make that it's not "true" reasoning without consciousness.
But the fact remains that it's able to "reason" its way through new problems and solve them.
It's not even at our level yet in a general sense, but has surpassed the vast majority of people in many areas.
This leads us to the logical conclusion that it can also surpass us in the other areas as well given the right training.
But if we want to see how competent something is, we analyze the results.
Humans don't inherently consider the truth value of propositions. For the longest time, things were true if they "worked". If we believed nonsense and survived, guess it must be true.
We have relationships between objects? What is that? Isn't that just relationships between data? Isn't that the entire point of an NN?
You missed the point here. It's not that imagination goes down as we get better at things, it's that we inherently are not perfect beings. The way we taste ice cream is slightly different every time we taste it. The way we think/feel about something is slightly different every time we do it. These uncontrollable variations in thought, action, and feeling give us a wide array of possibilities - our imagination.
If we could draw perfectly (like an AI), we'd never have variations in art.
How we as humans perceive imagination and what imagination really is. I say "real" imagination since people tend to think of imagination as this boundless, unlimited thing. The reality is much different. We are not good at judging ourselves. It's entirely possible an alien race with access to some weird "real" imagination would look at all of our art and be dazzled by how similar it all is and how nothing we've created is from real imagination.
Yes, but this only proves the point of how limited our imaginations are. We need an entirely new set of starting variables to achieve variations in imagination - pretty much just like with AI.
You can't judge a system's worth by its internal workings but rather by the outcome.
I'm making an argument for judging the results!
What if 7-zip had consciousness and could observe its own code processing? Would 7-zip now have reasoning?
But no, reasoning is, broadly, processing information to reach logical conclusions. If you can only do so with a particular set of information and only output a particular set of outputs, that wouldn't be reasoning.
Reasoning is when you can process lots of types of information to generate lots of types of logical conclusions.
This means when we solve math, we are reasoning. When a calculator solves math, it's processing.
Again, this seems like semantics. If we define walking as a human putting one foot in front of the other, then only humans can walk.
I've not seen evidence that LLMs can solve truly novel problems no human has ever solved before.
I don't think neural networks bring anything new to the table in terms of surpassing human capabilities.
I just find it sad that you need to denigrate your own species in a desperate attempt to defend a piece of technology.
I guess go read a bit about cognitive science and animal intelligence.
Data != ideas
Which is it? Does creativity happen at creation or at perception?
Uh... So what's "real imagination"?
Why should I grant the idea that a non-human could have such a "powerful" imagination that makes mine seem fake?
Human imagination is limited not because of lack of capability, but rather first due to laziness.
Human beings are equivalent to cockroaches only if you look at them ignoring all the internals.
Are you saying that if 7-zip's code were executed by a human, he would lose his reasoning?
The power of reasoning is that it's inherently limited.
7-zip can process any file you give it, be it images, audio, or text.
I'm hoping you're using "solve math" in two different senses, and not saying that the nature of the action changes fundamentally based on the thing that performs it.
The verb "to reason" appeared at a time when the only things that reasoned were humans.
If it's a problem never solved before, then they have.
These AIs have allowed us to simply talk to a computer and have it respond back in natural human language. This is a necessary stepping stone in furthering their abilities.
My statement was simple: humans suck at evaluating truths. How do our brains decide whether something is true or not? When we consider the truth value of something, are we doing so accurately? The truth is, we suck at it. People thought human sacrifices would make it rain and that thunder was the anger of the heavens.
You can ask some AIs and they'll point out the misinformation and reject it while people will eat it up.
Ideas are a type of information - all information is data. Our "ideas" are our connections between data. I'd say similar to an NN.
Our imagination is powerful for sure, but the limitations are clear.
But humans are clearly not anything like cockroaches, even to an alien, when just comparing our accomplishments.
Sure, but it can only process it in a single way. This is like being able to ask "why" after anyone says anything. Yes, it works for any conversational input, but it clearly isn't reasoning. Reasoning requires input information and a target question to answer. When something has the ability to reason, it means it can do this with a wide array of topics, taking in different types of information and reasoning about different types of questions.
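Something like this toy sketch in Python (a hypothetical example of mine, not anything from the thread): a single fixed rule that accepts any conversational input at all, yet plainly isn't reasoning.

    # A toy "why box": it handles absolutely any conversational input,
    # but with one fixed rule -- which is clearly not reasoning.
    for remark in ["The sky is blue.", "2 + 2 = 4", "I quit my job."]:
        print(remark, "->", "Why?")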
When a human applies a formula and a computer does it, the fundamental process is completely different. We are reasoning: figuring out which variables refer to what, what to plug in where, etc. The computer is simply executing predefined instructions.
Perhaps, but fundamentally it refers to thinking logically. If we assume human thinking is not the only form of thinking, then it's not unreasonable to say AI is reasoning.
What do you mean by that?
So in other words, they don't bring anything new to the table in terms of surpassing human capabilities.
We don't have access to ultimate truth, just to our singular reality, so there's no way to know.
Why, I had no idea LLMs had access to ultimate truth!
Yes, NNs contain data. You have your work cut out for you to show that they also contain ideas.
If the color doesn't exist, then imagining it is inherently a contradiction. A color isn't something that's real; it's a cognitive process. Second, if the color doesn't exist, then I can imagine anything (by the principle of explosion).
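For reference, the principle of explosion is the formal rule that a contradiction entails any proposition; a one-line sketch in Lean (my own formalization, purely for illustration):

    -- Ex falso quodlibet: from a proof of False, any proposition P follows.
    example (P : Prop) (h : False) : P := False.elim h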
So far you've cited one example.
I can easily imagine something so alien that it doesn't understand the difference between the two.
The reason you know 7-zip doesn't reason is not because of any particular feature of its inputs and outputs, but rather because you know how 7-zip is constructed.
It seems fundamentally impossible, given that sometimes a reasoning system and a mechanical system can produce identical outputs.
If I take your words as you've put them here, then necessarily LLMs don't reason, since what they're doing is fundamentally different from what we do.
They don't get tripped up when you present a scenario that contains a subtle mistake but is otherwise similar to something they've seen before; they just trudge along like nothing is wrong.
This is why LLMs cannot solve problems that require solutions that are unlike what they've seen before.
Like a new algebraic problem. Or a new riddle. The problem itself has never been solved, but those "types" of problems aren't new.
Is there a human alive that I can go to, ask about anything in any subject matter, and receive a comprehensive professional answer from?
Is there a human who can generate a whole working program in a matter of seconds?
Even if we grant them that they can't evaluate the fact of the matter until they try, perhaps a lot of trying... they keep going. At what point do they say, "You know what, with the facts and evidence we have gathered over the past 10-20 years, I don't think this sacrificing stuff is working"?
Wildfires are burning in LA and there are thousands of people right now in comment sections saying it's the literal wrath of God for being a liberal state with LGBTQ people. Did they consider the truth value of their beliefs anywhere near accurately?
If someone tells me the sky is blue and another tells me the sky is an illusion made by the government and NASA to keep me subservient... I'm an idiot if I even give the second guy the benefit of "well, I don't have access to ultimate truth, so I guess I can't actually really know!"
I'll spare you the journey: in the end, there's some circular defining of things, but I don't see why we can't say ChatGPT "thinks".
principal
If our brains, without us, can literally invent fucking colors, why can't we imagine a new one?
If our imagination can't do that, it's limited - because clearly it's possible, since it has literally already happened.
And what's the point of an imagination if it can only imagine things that already exist? It doesn't conjure new things entirely; it makes new from old as far as I can tell, and that's the biggest limitation.
Imagine the sensory input of a new sensory organ. You just can't do it. Why not?! Our brain HAS done this.
Sure, but a scientifically advanced alien of any kind would easily be able to tell the difference.
If I give 7-zip any sort of data, the output is always gonna be basically the same - just that data compressed. This would clearly be processing, as all data is processed the same way. If I give a reasoning box any sort of data, the output will be an analysis of that data, with viewpoints formed and questions asked.
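To make the "processed the same way" point concrete, here's a minimal sketch using Python's zlib as a stand-in for 7-zip's compressor (an assumption for illustration; 7-zip itself defaults to LZMA): the identical code path handles text, a byte table, or anything else, never interpreting what the bytes mean.

    import zlib

    # Text, binary, and repetitive data all go through the same routine;
    # the compressor never interprets what the bytes represent.
    for payload in [b"some english text", bytes(range(256)), b"\x00\x01" * 100]:
        print(len(payload), "bytes ->", len(zlib.compress(payload)), "bytes")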
To that end, if you were to put a good AI and a human in a black box and run this experiment, would you be able to tell if one wasn't reasoning (from analyzing the reasoning aspect of the output alone)?
It seems to me that AIs are limited because they are expected to give correct answers right away.
Imagine allowing them to run indefinitely as we do: constantly generating new ideas from old ones (which, is that not what we do fundamentally?), testing them, discarding bad ones, and repeating. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?
The idea that an NN cannot be reasoning is a little odd to me. We can look at evolution and probably agree we would never have looked at early brains and thought them capable of reasoning. The beginnings of our brains were simplistic.
If you saw the inner workings of a cockroach brain, you would perhaps say the same thing: "Look at it, it's just reacting to stimuli through pattern recognition! It would never solve novel problems!"
Given that the answers LLMs provide are encyclopedia-like, not ones that require specialized knowledge...
A correct program is more difficult, but AIs can't do that reliably either.
You're talking about pre-scientific societies, remember?
Are you saying there's no internal syllogism that links the propositions "LA is burning" and "it's burning because of God's wrath"?
AI can detect the tone of misinformation.
"Principle".
It could be that the datatype for color is complete. If we can fit 4294967296 numbers in an int, why can't we fit one more?
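A rough sketch of that analogy, emulating a 32-bit unsigned int in Python (the 32-bit width is my assumption, just for illustration): the type already enumerates all 4294967296 of its values, so "one more" has nowhere to go.

    # A 32-bit unsigned int holds exactly 2**32 distinct values (0..4294967295).
    MASK = 0xFFFFFFFF        # the largest representable value
    x = MASK                 # the "last" value the datatype can hold
    x = (x + 1) & MASK       # try to fit one more...
    print(x)                 # 0 -- it wraps around; the type was already complete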
If you plugged new eyes into it that can perceive a new wavelength of light and signal it to the brain, would the brain evoke a brand new color or an old one?
But what is the point of imagining something with no bearing on our lives?
Don't tell me what I can or can't do!
Of ANY kind?
If I say "I'm an ass man" you can know that it's my opinion because you know I'm a person. If a program says the same thing, how do you tell whether it's its opinion or something the program is programmed to say?
There's no intrinsic property in the data that lets you make that judgement.
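A minimal sketch of that point in Python (hypothetical, for illustration): a hardcoded program and a person's keyboard can produce byte-for-byte identical output, and nothing in the data records which one it was.

    # The same bytes can come from a hardcoded program or from a person typing;
    # the data itself carries no marker of opinion, intent, or authorship.
    hardcoded = "I'm an ass man"
    typed_by_human = "I'm an ass man"   # imagine this arrived via input()
    print(hardcoded.encode() == typed_by_human.encode())   # True -- identical bytes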
...we've historically defined reasoning based on how humans do it because that's all we had to observe. But perhaps that's too narrow a view. Just as we've expanded our understanding of intelligence to include different types (emotional, spatial, etc.), maybe we need to expand our understanding of reasoning to include different mechanisms. A bird's flight and an airplane's flight are both legitimately "flying" despite using different mechanisms.