AI is something special

I feel like AI is getting overlooked by many. I feel like I have a real professional in any field I want to learn about just waiting for me to ask things.

Yes, they can be wrong sometimes, but so can a real professional.

Through one of my AI training gigs, I have access to several different AIs, including GPT-4o, o1, and Claude 3.5 Sonnet/Haiku.

Having them all is an insane learning and productivity boost. Topics that are difficult to Google (because results come back for something else, you don't know the right wording for the subject, it's niche, etc.) are just answered by these models.

If I'm suspicious of one model's answer, I go to another.

Of course, I don't trust these AIs with any important information or anything that could identify me, but that's not a difficult issue to get around.


Their context windows and token limits are huge; I've been able to just send them big pieces of code to go through. This was especially helpful for my encryption program, since it's nearly a thousand lines long just for the encryption logic and I had completely forgotten pretty much all of it by the time I wanted to make some tweaks 🥲.



If you're a true expert in your field, then yes, the AI may not be at your level, though it's definitely getting there, and the training being paid for nowadays targets very specialized areas of expertise to close these gaps.

However, no one is an expert in everything... except for these AIs. They have weaknesses in certain fields (which ones depends on the model), but they make up for it in sheer vastness of knowledge.


They're growing in knowledge at a rapid pace, and I don't see any reason for the shortcomings of these AIs not to be addressed and improved until they're nonexistent.


Eventually, we might have something advanced enough to make us wonder at what point consciousness is simulated so well that it simply becomes consciousness itself.
AI will excel at some things and may never be trusted with others, or not for many decades yet to come. I just saw where a team set up a race car with AI and the first thing it did was slide off the track ass end forward and crunch up the platform -- something humans are equally good at, for sure, but the AI was supposed to know better. And a couple of years back I saw where an AI generated some new materials with some breakthroughs in chemistry, saving who knows how much R&D time.

Probably around 2005 I programmed several that did simple things perfectly... one was a throttle control on a boat that adjusted to match the wanted speed even if you had a tailwind or current or headwind or whatever external factors. Not exactly rocket science, but it did its job, and better than a person would for the same task, as its reaction times were just off the charts better.

Even if you are teaching it nonsense, it's a very important field. I despise the uses that are going to dominate the field in the near future (generating ads, tracking people, cheating on homework is already a big problem, etc.) but even those things will generate useful tools for other things eventually.

I don't care for the internet expert-at-everything, quick (and often wrong) answer generation stuff (again, it will lead to something, but right now it's a mess). But I believe that a deeply trained specialist AI for one or more related fields, built to solve specific types of problems, will really tear things up in some areas ... and SOON.

Anyway, you are doing something important. I honestly thought AI was going to stagnate (say around 2000) and just be a slightly better GIGO engine, advanced lookup table, or approximation function (depending on the training, type, etc.) but it has really taken off in new ways. Keep up the good work!
I don't care for the internet expert-at-everything, quick (and often wrong) answer generation stuff

It just isn't wrong that often anymore, though this does depend on which model you use. For example, Windows 11 Copilot comes with a just-OK AI that can often be wrong.

But Claude 3.5 Sonnet has really impressed me. I've been using it for a little over a month now, and it's been wrong maybe 2-3 times. I've been asking it for information on lots of topics, so this is very impressive.

There are definitely things it struggles with, like problems that require multiple complex steps to solve (it'll get some of the steps right, but it only needs to fail on one of those difficult steps to get a wrong answer).

But in general, if you use the AI to supplement your work instead of just having it do your work for you, it's unlikely to lead you astray.


The biggest issues I see are with the lighter models. They make little mistakes all the time. I gave GPT-4o mini code output that was formatted and just told it to give me the numbers back unformatted and comma-separated. It gave them back with two wrong numbers. The non-light models would almost certainly never make a mistake like that.
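For context, the deterministic version of that task is only a few lines. Something like the sketch below (C++, and purely hypothetical; it assumes the "formatting" was just spacing/alignment, and it's not the code or data I actually gave the model):

#include <iostream>
#include <iterator>
#include <regex>
#include <string>

int main()
{
    // Read everything from stdin, pull out the numeric tokens, echo them back comma-separated.
    std::string text{std::istreambuf_iterator<char>(std::cin),
                     std::istreambuf_iterator<char>()};
    std::regex number(R"([-+]?\d+(\.\d+)?)");
    std::string sep;
    for (std::sregex_iterator it(text.begin(), text.end(), number), end; it != end; ++it) {
        std::cout << sep << it->str();
        sep = ", ";
    }
    std::cout << '\n';
}

A task that mechanical shouldn't be something a model gets wrong, which is exactly why the mini models' slips stand out.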

Keep up the good work!

Training them to replace me lmao
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”

“What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking—there’s the real danger.”

-Frank Herbert

I'm being overly dramatic, for now. I've used an LLM at work to help me with some menial tasks. I'm still waiting for the day where I can just feed it my entire codebase (or at least, a few 1000-line files) and tell it to do some processing with that. Probably such programs already exist for a steeper price.

I do find it funny how there's so much AI slop now on the web that you need to be careful about accidentally using it as training data, else you get incestuous recursive artifact patterns / recessive traits showing up (model collapse). I also find it annoying that there's so much AI slop on the web; I hate the AI voices in a lot of videos, but they are getting alarmingly better. I find it concerning how AI will be used to invade privacy, used for fraud/scams, and be used to create fake images/videos for political exploitation—possibly to the point where you can't trust anything you don't see with your own eyes.

Does anyone know if there exists an AI model where you could feed it a few dozen images and have it generate new images based off the source images + text prompts? For example, if I give it 20 pictures of my face, then say "generate a new angle of my face with a mustache".
I'm still waiting for the day where I can just feed it my entire codebase (or at least, a few 1000-line files) and tell it to do some processing with that

I was able to throw over a thousand lines of code at Sonnet, though it was for a rather simple task. I haven't tested how well it would actually be able to change things and make big edits.

I hate the AI voices in a lot of videos, but they are getting alarmingly better.

I think they can be rather human-sounding, and a lot of the voices out there are purposefully designed to sound just odd enough that you know it's AI.

The best AI voices out there are VERY hard to distinguish from a real person. You have to listen to them for a while and see if you catch anything odd.

Of course, this assumes the script they're speaking was written by a human, or by an AI that doesn't constantly sound professional (which exist).


possibly to the point where you can't trust anything you don't see with your own eyes.

The AI companies are working on countermeasures for this, but who knows how effective they'll actually be.

For example, if I give it 20 pictures of my face, then say "generate a new angle of my face with a mustache".

Yes, I've seen an AI advertised on some social media that'll take a picture of you and generate new photo-realistic pictures of you. I just don't know what it's called.
Yes, they can be wrong sometimes, but so can a real professional.
A human professional can be wrong in subtleties, or about stuff where no consensus has been reached yet. A professional can also tell you when they're not sure about the answer they're giving.
An LLM will confidently tell you a grossly wrong answer and then correct itself when asked again.

Speaking for myself, I use AIs in four ways:
1. Literally as toys.
2. As reformatting aids (e.g. "convert this JSON into CSV"), or similarly as style anonymizers.
3. If I can't be fucked to write some boilerplate code that does something trivial.
4. If I don't know where to start with researching a topic.

Even with point 3 I've had problems. For example, I needed to run a child process and capture its output, and I was getting annoyed at the asynchronous API to do it, so I asked ChatGPT to do it for me. The first two answers had issues I had already addressed in my own code. A newbie wouldn't have caught those issues and would have pushed subtly broken code.
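For reference, the kind of boilerplate I mean is roughly this shape. A deliberately simplified, blocking sketch using POSIX popen rather than the asynchronous API I was actually fighting with (the function name is made up for illustration):

#include <array>
#include <cstdio>
#include <stdexcept>
#include <string>

// Run a shell command and capture its standard output.
// Blocking, no stderr, no exit status; just the minimal version of the chore.
std::string capture_output(const std::string& command)
{
    FILE* pipe = popen(command.c_str(), "r");
    if (!pipe)
        throw std::runtime_error("popen() failed");
    std::array<char, 4096> buffer{};
    std::string result;
    while (std::fgets(buffer.data(), static_cast<int>(buffer.size()), pipe))
        result += buffer.data();
    pclose(pipe);
    return result;
}

It's exactly this sort of chore that tempts you to outsource it, and exactly where the subtle mistakes creep in.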

So in summation I think the very opposite is true. AIs are way, way, WAY overhyped. They're fun as toys, and they're kind of useful as replacements for dumb interns that need constant supervision, and that's about it. The best application I've seen for LLMs is still https://neal.fun/infinite-craft/ , so that's saying something. When the AI winter finally hits, it's gonna hit hard.

https://www.reddit.com/r/TrueSTL/comments/10qy5x9/todd_based/
All this really depends on the AI model you're using. Most models are made for the masses, so they're weaker.

Even just a few months ago there were complex mathematical/coding problems I'd give the AIs and expect them to completely fail at, and now they're crushing it.


I recently wanted some C# code that would use the motherboard's TPM to encrypt some data. Not a single AI model was able to give me working, runnable code the first time.

When I went and looked at the documentation, it's no wonder they struggled (especially considering how niche this likely is). I linked the AI a site that had example code (it would've taken me forever to go through and figure it out myself), then it was like "thanks" and spat out working code.
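For anyone curious what "use the TPM to encrypt data" even involves, here's a rough sketch of one approach, done in C++ against Windows CNG's Platform Crypto Provider instead of the C# I actually used. Error checking is stripped and the key name is made up; treat it as an illustration of the idea, not the code the model produced:

#include <windows.h>
#include <ncrypt.h>   // link with ncrypt.lib
#include <vector>

std::vector<BYTE> tpm_encrypt(std::vector<BYTE> plaintext)
{
    NCRYPT_PROV_HANDLE prov = 0;
    NCRYPT_KEY_HANDLE  key  = 0;

    // Open the TPM-backed provider and create an RSA key whose private half never leaves the TPM.
    // (Re-running will fail with NTE_EXISTS; a real program would open the existing key instead.)
    NCryptOpenStorageProvider(&prov, MS_PLATFORM_CRYPTO_PROVIDER, 0);
    NCryptCreatePersistedKey(prov, &key, NCRYPT_RSA_ALGORITHM, L"ExampleTpmKey", 0, 0);
    NCryptFinalizeKey(key, 0);

    // First call sizes the output, second call encrypts a small buffer with PKCS#1 padding.
    DWORD needed = 0;
    NCryptEncrypt(key, plaintext.data(), (DWORD)plaintext.size(),
                  nullptr, nullptr, 0, &needed, NCRYPT_PAD_PKCS1_FLAG);
    std::vector<BYTE> ciphertext(needed);
    NCryptEncrypt(key, plaintext.data(), (DWORD)plaintext.size(),
                  nullptr, ciphertext.data(), needed, &needed, NCRYPT_PAD_PKCS1_FLAG);

    NCryptFreeObject(key);
    NCryptFreeObject(prov);
    return ciphertext;
}

The C# route typically wraps this same provider through CNG.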



A professional can also tell you when they're not sure about the answer they're giving.

ChatGPT is the biggest culprit here; by contrast, I've found that Claude 3.5 Sonnet will tell me if it's unsure of its answer (sometimes on its own, other times, if you point out a mistake, it'll tell you what gap in its knowledge may be affecting it).

It's not perfect, but professionals can similarly be wrong. Less likely? Yes. But when do we ever have access to an actual professional? This is the next best thing - and that's only if we wanted a professional in just a single field.


They're only getting better, and I don't see how reliability issues won't be a thing of the past. Especially as these AIs get integrated into real-world applications, they'll start learning from actual experience data as well.
It's not perfect, but professionals can similarly be wrong.
No, not similarly. It's not even a question of how likely it is that each is wrong. Humans and LLMs work in fundamentally different ways. A mechanic for example could make a mistake about the best way to fix an issue with an engine, but he won't think that the most likely diagnosis for the problem is that the car has atherosclerosis and needs a coronary bypass, because he's gotten confused and thinks you're writing a fiction about a living car.

But when do we ever have access to an actual professional?
Well, LLMs are trained on things on the Internet, so if the LLM knows about it, it must be out there somewhere. If it's not, then it's making shit up. If you want to say that it's a glorified search engine, yeah, sadly Google sucks nowadays, so you're right.

I don't see how reliability issues won't be a thing of the past
I used this example a few days ago talking about quantum computers, and I think it's apt here too: When galvanic batteries were discovered, people thought that the corrosion around the electrodes was something that could be solved with more research, when in fact it was what allowed the battery to work at all.
It could very well be that these issues are solvable, or that instead we're seeing a fundamental limitation of the technology that can't be worked around no matter how much data or compute you throw at the problem, at least not without a complete change in paradigm.

The way I've thought about it is that NNs work similarly to a reptile brain. You give the network some stimulus, it analyzes it in broad strokes and gives a response. This is fast, but it doesn't allow reasoning or introspection. The human brain instead reasons through symbols. "Carrot" is not merely a sequence of tokens, but a symbol unto itself. Your brain understands the idea of "carrot" and how it relates to other ideas, which is why you'll never construct a sentence like "it's carrotting outside" by mistake. Not because it's an unusual sentence you've never seen before, but because you have higher level cognitive functions that keep track of the relationships between concepts.
In short, I think LLMs need symbolic reasoning, like what CASs (computer algebra systems) do. It's a fundamentally different technology compared to playing statistical tricks with strings of text, though.
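To make the distinction concrete, here's a toy sketch (entirely made up; a real symbolic system is vastly more involved) of what it looks like to check a relation between concepts instead of scoring a token sequence:

#include <iostream>
#include <map>
#include <set>
#include <string>

// Each concept carries explicit categories; the relation check consults those,
// not how often the words happen to co-occur in text.
const std::map<std::string, std::set<std::string>> categories{
    {"rain",   {"weather-phenomenon"}},
    {"snow",   {"weather-phenomenon"}},
    {"carrot", {"vegetable", "food"}},
};

bool can_describe_weather(const std::string& idea)
{
    auto it = categories.find(idea);
    return it != categories.end() && it->second.count("weather-phenomenon") > 0;
}

int main()
{
    for (const std::string& word : {"rain", "carrot"})
        std::cout << "\"it's " << word << "ing outside\" -> "
                  << (can_describe_weather(word) ? "fine" : "rejected") << '\n';
}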
If you want to say that it's a glorified search engine

No search engine has reasoning capabilities, so I'd never even come close to calling it a glorified search engine.

You could conjure up a problem that's never been solved before, and the AI may be able to reason and solve it.

It could very well be that these issues are solvable, or that instead we're seeing a fundamental limitation of the technology that can't be worked around no matter how much data or compute you throw at the problem, at least not without a complete change in paradigm.

Reliability issues, in this case, stem from a lack of actual understanding by the AI models. What they understand well, they never fail at. Some models are better than others at recognizing what areas or knowledge they may be lacking.

If the AI is highly reliable in any aspect, then there's no reason to think it can't be trained enough to become the same way in other aspects.

This is fast, but it doesn't allow reasoning or introspection

Again, this is model dependent. Having used the o1 model (and trained it too, I think...), they take a problem, create a plan to solve it, and reason out every step of the plan.


There are clear differences between us and AI, yes. But this completely ignores that our intellectual brains evolved from those reptile brains! And there's no particular reason to think that our brains are the only method (or even a good method!) for logical reasoning. We know our brains survive first, reason second. Only a small minority of us humans actually fully appreciate the reasoning capabilities of our brains and leverage them against our... less reasonable aspects.

It's completely reasonable to think that the way AI learns is not a "bad" method, especially considering how it's doing in a few years what took evolution billions of years to accomplish.

Of course evolution didn't have billions of dollars in funding, but the main issue with evolution was it needed a good brain design. Once the brain is designed, that brain can learn and adapt.

With AI, we've already created a design that can learn and adapt, but we are trying to get this design taught and adapted, in just a few years, to things that took us hundreds of thousands of years to reach.

The fact it has gotten so far in a short time is insane.


I would encourage you to use the smarter models if you can, as they're not like the weaker models at all. They are much more reliable, have strong reasoning skills (usually), and can be better trusted to handle information/tasks without screwing up.


As a learning aid alone, AIs are insanely good. The amount of time I've saved by learning from AI models is huge. Things that would once take me hours of good research just to find good information on, then hours and days longer to fully comprehend, can now be done in a fraction of the time.
No search engine has reasoning capabilities, so I'd never even come close to calling it a glorified search engine.
Well, neither do LLMs, so the comparison is quite appropriate.

As a learning aid alone, AIs are insanely good. The amount of time I've saved by learning from AI models is huge. Things that would once take me hours of good research just to find good information on, then hours and days longer to fully comprehend, can now be done in a fraction of the time.
Yeah. That was my point. In the past Google could have fulfilled the same need.

You could conjure up a problem that's never been solved before, and the AI may be able to reason and solve it.
Nah. It may be able to solve it, but certainly not through reasoning.
Some models can mimic reasoning by dumping their state into the context as extraneous words unrelated to the answer, thus creating a loop between their input and output. You can really mess with those by insisting that they reply only with the actual answer, and nothing else.

What they understand well, they never fail at.
"Understand" is an unnecessary anthropomorphism which is not applicable to LLMs. A cockroach doesn't understand that it's in the light when it scurries away, it just reacts automatically in the only way its mechanism permits. You don't understand that fire damages you when you pull your hand away, you're just doing what your nervous system instructs. Understanding never enters into it.

It's completely reasonable to think that the way AI learns is not a "bad" method, especially considering how it's doing in a few years what took evolution billions of years to accomplish.
I mean, you're being more than extremely generous. NNs can do almost none of the things real brains do. For one, and most importantly, they have zero neuroplasticity. Training and inference are two completely separate stages. That means that a model can't truly learn like a brain does. If brains worked the way NNs do, the only way for you to learn something would be to clone yourself, inject the new fact into your clone's brain, and then die.

I would encourage you to use the smarter models if you can, as they're not like the weaker models at all.
Let me encourage you in the exact opposite direction. Try running the dumber models locally and play with them. I've been trying various models (from 7B up to 70B parameters) regularly all of last year, and it really takes the shine off of LLMs. I'm not exaggerating when I say all they can do is regurgitate input they've seen before, perhaps with some substitutions. Larger models don't work in a fundamentally different way (unless their operators are doing fuckery behind the scenes beyond pure LLM inference to make the model appear smarter), so there's no way for them to do something more special. All they have is a larger repertoire of possible responses.
Even image generation models are like that. I've lost count of the number of generated images I've seen with an almost-Patreon logo in the corner.
Well, neither do LLMs, so the comparison is quite appropriate.

You can't say this when they clearly can. If you define reasoning so strictly that what an LLM does is not reasoning, then you may end up making it so strict that even humans aren't reasoning by your standard.

In the past Google could have fulfilled the same need.

It fulfilled the need of not having to spend hours looking through books. Now AI is a generational leap above that.

But this is only one use-case.

Nah. It may be able to solve it, but certainly not through reasoning.

Again, you may be undefining human reasoning.


"Understand" is an unnecessary anthropomorphism which is not applicable to LLMs

Sure, but I didn't necessarily mean it that way. I mean it has strengths where its data is strongest - basically equivalent to what we would think of as understanding.


NNs can do almost none of the things real brains do

In this case, it's more about the end result than how you get there. Our brains have their strengths and weaknesses, just as NNs do. The question is why would an LLM not be able to reach our level of reasoning capabilities? Or at least a comparable level for whatever we end up calling their way of reasoning.

I'm not exaggerating when I say all they can do is regurgitate input they've seen before, perhaps with some substitutions.

I'd argue this is hardly different from an actual human brain. A lot of our creativity comes about from how mistake-prone we are, how imperfect we are. If we could draw perfectly (like an AI), we'd never have variations in art. But because we suck we end up unlocking many more possibilities.

But we definitely can't do something that would require a "real" imagination, like envisioning a new color that doesn't exist.

In this way, AIs are actually handicapped in art and need to learn to have variability - and there are many models that do that pretty well now.


I mean, I don't see how your argument would be any different from this imaginary conversation between two AIs:

AI 1: Humans are smart! Sure some are dumb as balls, but if you talk to the really smart ones, you'll see they're very reasonable and capable intellects.

AI 2: I encourage you to talk to the dumb ones, they just do very simple pattern recognition and almost all of their actions are instinctual, reasoning and problem solving are weak and not at the forefront of their thought process. The "smartest" humans work essentially the same.


What's the point? If we judge the process instead of the outcome, then we'd have to say human brains are like cockroach brains and we're all operating on the illusion of reasoning. Obviously we have reasoning skills because that's what we call information processing in order to reach logical conclusions.

If AI can do this same thing, then I'd say it's reasoning. Just because we know more about how it does it, it seems to "feel" fake. But in reality, someone who understood the inner workings of a brain's reasoning skills may say the same for us.

What really matters is the results, the output, whether an AI or human.




EDIT:

You may call the AI's reasoning fake due to it needing data just to be able to replicate what reasoning even looks like, but this is no different than a human brain. The only difference is that our brains have evolved to have this instinctual understanding of the real world which our reasoning skills could then build upon, whereas the AI gains this through training data.
If you define reasoning so strictly that what an LLM does is not reasoning, then you may end up making it so strict that even humans aren't reasoning by your standard.
It's tautological to say that humans reason, because that's what the word means. It's a description of a human cognitive process, made by the thing that has that cognitive process. When we say that something other than a human reasons, we're making an analogy between what humans do and what the thing does. We can do that because we, as humans, are privy to internal details about the process of reasoning that a non-human external observer would not be privy to. There's no need to alienate ourselves from ourselves in order to apply words to things.
If we were to compare only the results of a process to determine whether reasoning happened, we might find that any causal chain of events contains reasoning in some form.

In this case, it's more about the end result than how you get there.
Going purely by results, some 60 years ago you might have concluded that ELIZA could reason. When discussing the presence or absence of cognitive process analogs, we should not go purely by what outwardly seems to happen.

The question is why would an LLM not be able to reach our level of reasoning capabilities? Or at least a comparable level for whatever we end up calling their way of reasoning.
Because we know what an LLM does, and it doesn't fit what reasoning is. LLMs don't consider the truth value of propositions, or the relationships between objects. All they can do is make analogies between the text last seen and all the text they've seen before.

A lot of our creativity comes about from how mistake-prone we are, how imperfect we are. If we could draw perfectly (like an AI), we'd never have variations in art. But because we suck we end up unlocking many more possibilities.
I don't agree at all. Creativity didn't go down in the Renaissance, when the quality of art made a huge leap. If anything, the improvements in technique empowered artists to create more.
The remixing part of the creative process does not happen on the medium, it happens inside the artist's brain. If you want a why for creativity, it's that our brains are all slightly different and with different tastes, and creation happens in the interplay between slightly different, highly complex systems. If we were all exactly identical there would be no need for creativity. We'd eventually converge to a maximum of aesthetic appeal that would apply to everyone equally.

But we definitely can't do something that would require a "real" imagination, like envisioning a new color that doesn't exist.
Why is that what "real imagination" is? Again, why are you purposely pretending not to be human to discuss cognitive processes?

I mean, I don't see how you're argument would be any different than this imaginary conversation between two AIs:

AI 1: Humans are smart! Sure some are dumb as balls, but if you talk to the really smart ones, you'll see they're very reasonable and capable intellects.

AI 2: I encourage you to talk to the dumb ones, they just do very simple pattern recognition and almost all of their actions are instinctual, reasoning and problem solving are weak and not at the forefront of their thought process. The "smartest" humans work essentially the same.
If your argument is that, by following my own argument, an AI would be right to call me dumb by its own standards then that's fine by me. I have no interest in convincing AIs that I'm smart. AIs also have no interest in convincing me that they're smart; it's other humans who want that.

(Also, I don't agree that dumb humans don't reason.)

If we judge the process instead of the outcome, then we'd have to say human brains are like cockroach brains and we're all operating on the illusion of reasoning.
That would be exactly judging the result instead of the process. Going purely by external appearances, there's no fundamental difference between a human, a cockroach, a plant, and a rock. They're all doing exactly what the physics of their particles determine they'll do.

Obviously we have reasoning skills because that's what we call information processing in order to reach logical conclusions.
The reason it's obvious is because we can apply introspection to observe our own line of reasoning happening. If reasoning is just processing information then 7-zip has reasoning. You give it some information in the form of a file and it reaches the logical conclusion of a compressed file, which correctly meets certain logical properties with respect to the input. We're judging results, not the internals of the process, right?

But in reality, someone who understood the inner workings of a brain's reasoning skills may say the same for us.
If we completely understood how the brain works, that would not undefine reasoning. In fact, it would define it perfectly. We could perform the same analysis on a brain analog and decide how much like our own it is and how much its cognitive processes resemble our own reasoning.
It's tautological to say that humans reason, because that's what the word means.

The meaning of the word and how we interpret the word are slightly different.

There is an argument to make that it's not "true" reasoning without consciousness. But the fact remains that it's able to "reason" its way through new problems and solve them.

No, AI is not perfect, but neither are we. It's not even at our level yet in a general sense, but has surpassed the vast majority of people in many areas. This leads us to the logical conclusion that it can also surpass us in the other areas as well given the right training.

Going purely by results, some 60 years ago you might have concluded that ELIZA could reason

That's not even close to true. Something that tries and ultimately fails at reasoning 99% of the time would never qualify as being able to "reason". It wouldn't even begin to trick someone.


we should not go purely by what outwardly seems to happen

This is only true when analyzing the internals. But if we wanted to see how competent something is, we analyze the results.


LLMs don't consider the truth value of propositions, or the relationships between objects.

Humans don't consider truth value of propositions inherently. For the longest time, things are true if they've "worked". If we believed nonsense and survived, guess it must be true.

We had to purposefully pull ourselves away from our faulty thought processes in order to reach truths.

AIs DO consider truth values, arguably better than us. Instead of surviving to gauge truth, we feed them high quality data and give them specific complex goals. They'll figure out the truth along the way because the truth works.

We have relationships between objects? What is that? Isn't that just relationships between data? Isn't that the entire point of an NN?


If anything, the improvements in technique empowered artists to create more.

You missed the point here. It's not that imagination goes down as we get better at things, it's that we inherently are not perfect beings. The way we taste ice cream is slightly different every time we taste it. The way we think/feel about something is slightly different every time we do it.

These uncontrollable variations in thought, action, and feeling give us a wide array of possibilities - our imagination.

An AI is inherently held back here because it will not have run-to-run variance unless it's purposefully introduced.

Why is that what "real imagination" is? Again, why are you purposely pretending not to be human to discuss cognitive processes?

There are two viewpoints to consider: How we as humans perceive imagination and what imagination really is.

I say "real" imagination since people tend to think of imagination as this boundless thing that is unlimited. The reality is much different.

We are not good at judging ourselves. It's entirely possible an alien race that has access to some weird "real" imagination would look at all of our arts and be dazzled by how similar it all is and how nothing we've created is from real imagination.

it's that our brains are all slightly different and with different tastes

Yes, but this only proves the point of how limited our imaginations are. We need an entirely new set of starting variables to achieve variations in imaginations - pretty much just like with AI.

If your argument is that, by following my own argument, an AI would be right to call me dumb by its own standards then that's fine by me.

The argument is that you made a poor argument. You can't judge a system's worth by its internal workings; you have to judge it by the outcome. Imagine this scenario:

System 1: Internal workings: a mess, wtf happened in there? That looks stupid. Outcome: Power efficiency and high problem solving skills that are applicable to any and every field.

System 2: Internal workings: Precise, calculated, and logically sound. Outcome: It runs in circles then falls off a cliff.


The outcome should be how we judge the system first, the internal workings second.


(Also, I don't agree that dumb humans don't reason.)

They wouldn't say that either - only that dumb human reasoning is the basis for, and works off the same principles as, smart human reasoning, so they're all dumb.

That would be exactly judging the result instead of the process... They're all doing exactly what the physics of their particles determine they'll do.

I'm making an argument for judging the results! Not that we should ignore the process, but we have to do that with the results in mind. The process is secondary to the results.

This isn't the most logically sound approach - yes. If we were gods, we could judge any system by its internal workings and never even have to look at output (as we'd already know what outputs are possible and every limitation of the system).

For a practical human approach, we need to judge the process secondarily to the outcomes. Why do you care if a self-driving car stopped because it detected an object or because it values human life, for example? You wouldn't say, "scrap this shit, it only stops because we told it not to hit things!"

We're judging results, not the internals of the process, right?

It's a disingenuous interpretation of my definition of reasoning. I could also just turn it back on you. What if 7-zip had consciousness and could observe its own code processing? Would 7-zip now have reasoning?

The fact that 7-zip has no control over its processing wouldn't even go against your belief that humans have no such control either (determinism, right?).


But no, reasoning is, broadly, processing information to reach logical conclusions. If you can only do so with a particular set of information and only output a particular set of outputs, that wouldn't be reasoning, that would be processing - merely a precursor to reasoning.

Reasoning is when you can process lots of types of information to generate lots of types of logical conclusions. This means when we solve math, we are reasoning. When a calculator solves math, it's processing.

If we say reasoning must involve consciousness - I may agree. But if consciousness is the ONLY missing aspect, it may seem nitpicky to classify something as not reasoning.

And yes, I am using a different definition of reasoning than what the word likely *really* means, but I do so to talk about the intrinsic value of reasoning rather than whether or not other types of logical thought should or shouldn't be considered reasoning.


If we completely understood how the brain works, that would not undefine reasoning. In fact, it would define it perfectly.

Again, this seems like semantics. If we define walking as a human putting one foot in front of the other, then only humans can walk.
There is an argument to make that it's not "true" reasoning without consciousness.
I disagree. I think it's arguable that automated theorem provers reason, yet are not conscious.

But the fact remains that it's able to "reason" its way through new problems and solve them.
Again, disagree. I've not seen evidence that LLMs can solve truly novel problems no human has ever solved before.

It's not even at our level yet in a general sense, but has surpassed the vast majority of people in many areas.
Non-NN computing had already done that decades ago, in terms of raw computational power, and computers in general had more reliable storage even before then. I don't think neural networks bring anything new to the table in terms of surpassing human capabilities.

This leads us to the logical conclusion that it can also surpass us in the other areas as well given the right training.
Non sequitur. It may be that it can, or that it can't. That's left to be seen, and far from obvious at this point in time.

But if we wanted to see how competent something is, we analyze the results.
But we're not analyzing competency. We're analyzing whether the thing reasons.

Humans don't consider truth value of propositions inherently. For the longest time, things are true if they've "worked". If we believed nonsense and survived, guess it must be true.
I'm not going to argue for the more than sufficiently demonstrated intuitive reasoning capabilities of Homo sapiens. I just find it sad that you need to denigrate your own species in a desperate attempt to defend a piece of technology. I don't know, man. I guess go read a bit about cognitive science and animal intelligence.

We have relationships between objects? What is that? Isn't that just relationships between data? Isn't that the entire point of an NN?
Data != ideas. A neural network has no concept of snow, it just has embedded in its data structure the information that the most likely word after "snow is" is "cold". There's no baseline to anchor the connections between the words (other than the training set), it's just language floating in the ether.

You missed the point here. It's not that imagination goes down as we get better at things, it's that we inherently are not perfect beings. The way we taste ice cream is slightly different every time we taste it. The way we think/feel about something is slightly different every time we do it.

These uncontrollable variations in thought, action, and feeling give us a wide array of possibilities - our imagination.
But you said
If we could draw perfectly (like an AI), we'd never have variations in art.
If the variation comes from how we perceive reality, then even if we could draw perfectly (or make ice cream perfectly each time) we would still end up creating new things, because we'd perceive a perfect drawing differently each time.
Which is it? Does creativity happen at creation or at perception?

How we as humans perceive imagination and what imagination really is.

I say "real" imagination since people tend to think of imagination as this boundless thing that is unlimited. The reality is much different.

We are not good at judging ourselves. It's entirely possible an alien race that has access to some weird "real" imagination would look at all of our arts and be dazzled by how similar it all is and how nothing we've created is from real imagination.
Uh... So what's "real imagination"? Sorry, is this something you've thought of yourself or that you've heard someone else say? I've never heard anything even remotely like this. "Real imagination"? Imagination is not a form of perception. We're not accessing an invisible stratum of reality when we imagine things. Why should I grant the idea that a non-human could have such a "powerful" imagination that makes mine seem fake? What unit is the power of imagination measured in?

Yes, but this only proves the point of how limited our imaginations are. We need an entirely new set of starting variables to achieve variations in imaginations - pretty much just like with AI.
If you want unlimited imagination then pipe /dev/urandom into a 24-bit bitmap. Human imagination is limited not because of lack of capability, but rather first due to laziness, and second because the kind of things humans find interesting have a moderate amount of entropy, neither too low nor too high, because that's what our reality is like.
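(Taken literally, that experiment is a handful of lines: fill a 24-bit image with random bytes. A sketch follows, using std::random_device in place of /dev/urandom and a PPM instead of a real BMP to keep the header trivial; the output is pure noise, which is exactly the point about entropy.)

#include <fstream>
#include <random>

int main()
{
    // 256x256 image, 3 random bytes per pixel: maximal "imagination", zero interest.
    const int w = 256, h = 256;
    std::ofstream out("noise.ppm", std::ios::binary);
    out << "P6\n" << w << ' ' << h << "\n255\n";
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte(0, 255);
    for (int i = 0; i < w * h * 3; ++i)
        out.put(static_cast<char>(byte(rng)));
}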

You can't judge a system's worth by its internal workings; you have to judge it by the outcome.
I mean, I can judge anything by whatever standard I feel like.
That said, I'm not saying "LLMs don't reason, therefore they are bad". I'm saying "LLMs don't reason, therefore they are not smart".
"Smart" is not an adjective that's applicable to LLMs, the same way that "red" isn't.

I'm making an argument for judging the results!
I'm saying that your example is working against you. Human beings are equivalent to cockroaches only if you look at them ignoring all the internals. I mean, what's different about them? They live, they breathe, they eat, they breed, and they die. They seem to be doing more or less the same things, just one of them is much more circuitous about them, for some reason us aliens won't look into because we don't care about their internal workings.

What if 7-zip had consciousness and could observe its own code processing? Would 7-zip now have reasoning?
Clearly. I don't know what you expected me to answer. Are you saying that if 7-zip's code was executed by a human that he would lose his reasoning?

But no, reasoning is, broadly, processing information to reach logical conclusions. If you can only do so with a particular set of information and only output a particular set of outputs, that wouldn't be reasoning
The power of reasoning is that it's inherently limited, if by nothing else then at least by its premises. If P -> Q, and P, and Q can be true, false, or an invisible pink unicorn, then reasoning would be no good whatsoever.

Reasoning is when you can process lots of types of information to generate lots of types of logical conclusions.
7-zip can process any file you give it, be it images, audio, or text. Yes, it can process only files but your brain is likewise limited in the inputs it's able to parse, is it not? Or are you able to process senses you don't have?

This means when we solve math, we are reasoning. When a calculator solves math, it's processing.
I'm hoping you're using "solve math" in two different senses, and not saying that the nature of the action changes fundamentally based on the thing that performs it. Does a human reason about the math when he executes an arithmetic algorithm that he doesn't understand? Does a computer algebra system reason when it systematically applies a set of sound rules to reduce a symbolic expression into a simpler form?

Again, this seems like semantics. If we define walking as a human putting one foot in front of the other, then only humans can walk.
I don't know what you want from me. The verb "to reason" appeared at a time when the only things that reasoned were humans, to describe something humans do. It's not obvious to me that you do reason (because I'm trapped in my own skull), but I'm going to assume you're human like me and that you therefore do reason. I can't extend that same courtesy to clearly non-human things carte blanche. If I did, I'd have to consider that maybe my own computer can reason and all this time I've been conversing with it. After all, I'm not evaluating the internal behavior of the object, and from my point of view all I'm doing is typing into a keyboard and reading words off a screen.

Damn, over 7900 characters.
I've not seen evidence that LLMs can solve truly novel problems no human has ever solved before.

Well, I suppose it would depend on how you define a novel problem. If it's a problem that's never been solved before, then they have. A different kind of problem never solved before? Maybe not, but I don't see why they couldn't, depending on the difficulty.

I don't think neural networks bring anything new to the table in terms of surpassing human capabilities.

These AIs have allowed us to simply talk to a computer and have it respond back in natural human language. This is a necessary stepping stone in furthering their abilities.

I just find it sad that you need to denigrate your own species in a desperate attempt to defend a piece of technology.

My comments on humanity have nothing to do with defending AIs.

I guess go read a bit about cognitive science and animal intelligence.

I'd suggest looking at history and human beliefs.

My statement was simple: humans suck at evaluating truths. How do our brains decide whether something is true or not? When we consider the truth value of something, are we doing so accurately? The truth is, we suck at it. People thought human sacrifices would make it rain and that thunder was the anger of the heavens.

Even the most intelligent of us fall victim to these fallacies and biases. Only by properly understanding and educating ourselves are we able to overcome these issues to a good extent. Even then, we're not perfect, and it's the minority of humanity that has gotten to that point.


You can just look at world politics being disrupted by misinformation campaigns from the right. Why is disinformation so rampant? Why is it so easy to consume? You can ask some AIs and they'll point out the misinformation and reject it while people will eat it up.


Data != ideas

Ideas are a type of information - all information is data. Our "ideas" or connections are our connections between data. I'd say similar to an NN.

Which is it? Does creativity happen at creation or at perception?

It's both. You use your imagination in deciding what to draw and in the drawing itself. If you could draw perfectly, then the same idea would produce the same drawing. This is as opposed to what would actually likely happen, which is that you're very very unlikely to be able to make the same piece of art exactly the same twice (assuming it's complex enough).

With enough complexity, you would struggle to even replicate your own art. Again, as opposed to an AI which could recreate it a million times exactly the same. Creativity comes into play from the creation of the idea to the implementation.


Uh... So what's "real imagination"?

It's what we typically think of as an imagination. I'm saying our imaginations are limited.

Why should I grant the idea that a non-human could have such a "powerful" imagination that makes mine seem fake?

Maybe nothing "could" have such a powerful imagination. But we can imagine (hehe) an alien with an imagination so powerful that they could imagine a new color never seen before. An imagination with less limitations and more variation than ours.

Human imagination is limited not because of lack of capability, but rather first due to laziness

I strongly disagree. Our imagination is powerful for sure, but the limitations are clear.

Human beings are equivalent to cockroaches only if you look at them ignoring all the internals

Only if you compare them on such basic things. But humans are clearly not anything like cockroaches, even to an alien, when just comparing our accomplishments.

Are you saying that if 7-zip's code was executed by a human that he would lose his reasoning?

I mean if 7-zip itself had consciousness. I don't think this is important since you said reasoning does not require consciousness.

The power of reasoning is that it's inherently limited

Well sure. But the argument being made is that reasoning in the real world has a lot of variables. The point is that the things you may reason about, and the variables you consider in your reasoning, form a huge, open-ended list.

This is as opposed to 7-zip, which can only do a specific function. This is why we call it processing and not reasoning.

7-zip can process any file you give it, be it images, audio, or text.

Sure, but it can only process it in a single way. This is like being able to ask, "why" after anyone says anything. Yes it works for any conversational input, but clearly isn't reasoning.

Reasoning requires input information and a target question to answer. When something has the ability to reason, it means it can do this with a wide array of topics, taking in different types of information and reasoning for different types of questions.

I'm hoping you're using "solve math" in two different senses, and not saying that the nature of the action changes fundamentally based on the thing that performs it.

Well, both. Obviously we can reason our way through a math problem while a calculator may have a hard-coded process for solving something. So you and a calculator can both solve the same math problem, but only you are reasoning; the calculator is processing.

When a human applies a formula and a computer does it, the fundamental process is completely different. We are reasoning, figuring out what variables refer to what, plug in what where, etc.. The computer is simply executing predefined instructions.

The verb "to reason" appeared at a time when the only things that reasoned were humans

Perhaps, but fundamentally it refers to thinking logically. If we assume human thinking is not the only form of thinking, then it's not unreasonable to say AI is reasoning.
If it's a problem that's never been solved before, then they have.
What do you mean by that?

These AIs have allowed us to simply talk to a computer and have it respond back in natural human language. This is a necessary stepping stone in furthering their abilities.
Uh huh... So in other words, they don't bring anything new to the table in terms of surpassing human capabilities. AIs don't get to score a point on something they're not able to do yet.

My statement was simple: humans suck at evaluating truths. How do our brains decide whether something is true or not? When we consider the truth value of something, are we doing so accurately? The truth is, we suck at it. People thought human sacrifices would make it rain and that thunder was the anger of the heavens.
You're not talking about evaluating truth, you're talking about evaluating facts. It's a subtle but important distinction.
You're putting me in the position of having to defend superstition and I don't like that, but fine. It's not possible to prove that human sacrifices don't work. We can prove that they don't *appear* to work, but that's different. It could be that the outcome when you didn't do a sacrifice was indeed different. We don't have access to ultimate truth, just to our singular reality, so there's no way to know.

Analyzing facts and causality is something that is inherently difficult given incomplete information (which any agent would have, be it human or otherwise). It's not that we suck at it, we're just not omniscient, and often we don't have the luxury to wait to make a decision.

You can ask some AIs and they'll point out the misinformation and reject it while people will eat it up.
Why, I had no idea LLMs had access to ultimate truth! Fascinating. Is it like a microchip that exists partially in the metaphysical realm or something? It must be something like that if it's able to decide the truth value of statements about the outside world.

Ideas are a type of information - all information is data. Our "ideas" or connections are our connections between data. I'd say similar to an NN.
Hence why they're not the same. Yes, NNs contain data. You have your work cut out for you to show that they also contain ideas.

Our imagination is powerful for sure, but the limitations are clear.
I don't agree. So far you've cited one example, a color that doesn't exist. Two things: First, that doesn't show that our imagination is limited. If the color doesn't exist then imagining it is inherently a contradiction. A color isn't something that's real, it's a cognitive process. Second, if the color doesn't exist then I can imagine anything (by principle of explosion). I'm imagining a color that tastes like bolognese sauce and smells like bleach.

But humans are clearly not anything like cockroaches, even to an alien, when just comparing our accomplishments.
I can easily imagine something so alien that it doesn't understand the difference between the two.

Sure, but it can only process it in a single way. This is like being able to ask, "why" after anyone says anything. Yes it works for any conversational input, but clearly isn't reasoning.

Reasoning requires input information and a target question to answer. When something has the ability to reason, it means it can do this with a wide array of topics, taking in different types of information and reasoning for different types of questions.
You're continuing to fail to establish a distinction. You're also always applying the single function of your brain to the input of your senses and sending the output to your motor system. If we're treating your brain and 7-zip's encoding function as black boxes and just looking at how they behave externally, there's not that much difference, other than 7-zip being more reliable. What test is there that takes two strings of bits and decides if one of them is the result of applying a reasoning system to the other? It seems fundamentally impossible, given that sometimes a reasoning system and a mechanical system can produce identical outputs.
The reason you know 7-zip doesn't reason is not because of any particular feature of its inputs and outputs, but rather because you know how 7-zip is constructed.

When a human applies a formula and a computer does it, the fundamental process is completely different. We are reasoning, figuring out what variables refer to what, plug in what where, etc.. The computer is simply executing predefined instructions.
If I take your words as you've put them here, then necessarily LLMs don't reason, since what they're doing is fundamentally different from what we do.

Perhaps, but fundamentally it refers to thinking logically. If we assume human thinking is not the only form of thinking, then it's not unreasonable to say AI is reasoning.
It is unreasonable when you consider what it's doing internally. LLMs don't consider the truth value of propositions and the logical connections between statements. They don't get tripped up when you present a scenario that contains a subtle mistake but is otherwise similar to something they've seen before, they just trudge along like nothing is wrong. They're not actually considering the variables and producing an answer that didn't exist before, they're just fitting your question into the best mold they have and giving you the answer they have stored for that mold, maybe modified according to the way your question didn't fit the mold. This is why LLMs cannot solve problems that require solutions that are unlike what they've seen before.
What do you mean by that?

Like a new algebraic problem. Or a new riddle. The problem itself has never been solved, but those "types" of problems aren't new.

So in other words, they don't bring anything new to the table in terms of surpassing human capabilities.

Is there a human alive that I can go to and ask about anything in any subject matter and receive a comprehensive, professional answer from?

Is there a human who can generate a whole working program in a matter of seconds?

It's already surpassed us in several ways, and I'm saying it will likely continue to do so.

We don't have access to ultimate truth, just to our singular reality, so there's no way to know.

You thought I was being bad, then you turn around and defend human sacrifices lmao.

Yes, we don't have access to ultimate truth. So I can see your point here being, "we don't know 'till we try!" Then after we try, perhaps we don't know the full extent to which it affected change.

However, I'm not entirely sure where this is going. Even if we grant that they can't evaluate the fact of the matter until they try, perhaps with a lot of trying... they keep going. At what point do they say, "You know what, with the facts and evidence we have gathered over the past 10-20 years, I don't think this sacrificing stuff is working"? This is evaluating truth, correct?

Wildfires are burning in LA and there are thousands of people right now in comment sections saying it's the literal wrath of God for being a liberal state with LGBTQ people. Did they consider the truth value of their beliefs anywhere near accurately?

Why, I had no idea LLMs had access to ultimate truth!

It's a new update.

If someone tells me the sky is blue and the other tells me the sky is an illusion made by the government and NASA to keep me subservient.. I'm an idiot if I even give the second guy the benefit of, "well, I don't have access to ultimate truth so I guess I can't actually really know!"

Yes, NNs contain data. You have your work cut out for you to show that they also contain ideas.

The data is connected via weights. Different data is strongly correlated with some data and weakly connected with other data. The whole point of an NN is to try and simulate an actual brain, I believe.

Whether it's close, I wouldn't know, maybe not. But the principle is somewhat similar.

I've had to Google elementary-level words since this debate does challenge semantics. I'll spare you the journey; in the end, there's some circular defining of things, but I don't see why we can't say ChatGPT "thinks".

It's just one of those things where I'd rather argue the principle rather than the strict definitions of the words we use. But even that is difficult, since the brain's internal workings are not laid out to us, while an LLM's are.

If the color doesn't exist then imagining it is inherently a contradiction. A color isn't something that's real, it's a cognitive process. Second, if the color doesn't exist then I can imagine anything (by principle of explosion).

I find this argument to be a little strange. A color isn't real, yes. But our imaginations are cognitive processes as well.

If our brains, without us, can literally invent fucking colors, why can't we imagine a new one? Our brains are so powerful they literally conjured up a literal fucking made up experience of the real world so we can differentiate different frequencies of light, then fed it to our conscious mind.

If our imagination can't do that, it's limited - because clearly it's possible since it has literally already happened.

And what's the point of an imagination if it can only imagine things that already exist? It doesn't conjure new things entirely, it makes new from old as far as I can tell, and that's the biggest limitation.

So far you've cited one example

Imagine the sensory input of a new sensory organ. You just can't do it. Why not?! Our brain HAS done this.

This is clearly a limitation.

I can easily imagine something so alien that it doesn't understand the difference between the two.

Sure, but a scientifically advanced alien of any kind would easily be able to tell the difference.

The reason you know 7-zip doesn't reason is not because of any particular feature of its inputs and outputs, but rather because you know how 7-zip is constructed.

I argue the opposite, in fact. We CAN give inputs to black boxes and determine reasoning skills by looking at the output.

I argue that a human and 7-zip can accomplish the same task, but one was reasoning while the other wasn't. This is because reasoning is a system that we can try to define.

If I give 7-zip any sort of data, the output is always gonna be basically the same - just that data compressed. This would clearly be processing, as all data is processed the same way.

If I give a reasoning box any sort of data, the output will be an analysis of that data, with viewpoints formed and questions asked.
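
To illustrate the "all data is processed the same way" point with something concrete, here's a small C++ sketch using zlib as a stand-in for 7-zip (the example strings are arbitrary). Whatever the bytes mean to a human reader, the compressor applies the exact same transformation to them.

#include <zlib.h>   // link with -lz
#include <iostream>
#include <string>
#include <vector>

// Compress a string with zlib's one-shot API; the transformation depends
// only on the raw bytes, never on what the text means.
std::vector<unsigned char> deflate_bytes(const std::string& data)
{
    uLongf destLen = compressBound(data.size());
    std::vector<unsigned char> out(destLen);
    compress(out.data(), &destLen,
             reinterpret_cast<const Bytef*>(data.data()), data.size());
    out.resize(destLen);
    return out;
}

int main()
{
    std::cout << deflate_bytes("a poem about loss").size() << " bytes\n";
    std::cout << deflate_bytes("a proof with a subtle flaw").size() << " bytes\n";
}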

It seems fundamentally impossible, given that sometimes a reasoning system and a mechanical system can produce identical outputs.

It may not be possible given a finite set of data inputs; you could theoretically create a mechanical black box that has preprogrammed reasoning-like outputs available for every input you'll give it (somehow). But practically, you'll probably be able to tell within just a few carefully selected inputs.

To that end, if you were to put a good AI and a human in a black box and run this experiment, would you be able to tell if one wasn't reasoning (from analyzing the reasoning aspect of the output alone)?

If I take your words as you've put them here, then necessarily LLMs don't reason, since what they're doing is fundamentally different from what we do.

This is a small misunderstanding of my point. My point is not that what humans do is reasoning and everything else is not. The point was that the human brain is a system of reasoning - hence our outputs must be the byproduct of reasoning.

Calculators do not reason - hence their outputs cannot ever be the product of reasoning. So even when doing the same exact math problem, we reason while the calculator does not.

This is not to say that there aren't other systems of reasoning other than ours.

They don't get tripped up when you present a scenario that contains a subtle mistake but is otherwise similar to something they've seen before, they just trudge along like nothing is wrong.

This seems like a problem to be solved, as I haven't had that issue much with the stronger AIs.

This is why LLMs cannot solve problems that require solutions that are unlike what they've seen before.

The fundamental question here becomes, could an AI system learn the meta-process of innovation itself? It seems to me that AIs are limited, because they are expected to give correct answers right away.

This is both for the sake of the user wanting an answer, and for the sake of the processing centers that are running to generate these answers.

However, imagine unshackling them. Allowing them to run indefinitely as we do, constantly allowing them to generate new ideas from old ones (which, is that not what we do fundamentally?), test them, discard bad ones, and repeat. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?

Maybe not yet. But does that mean the current model of these AIs will never reach that point, or simply that there are improvements possible to get there?


The idea that an NN cannot be reasoning is a little odd to me. We can look at evolution and probably agree we would never have looked at early brains and thought them capable of reasoning. The beginnings of our brains were simplistic.

If we saw the inner workings of a cockroach brain, you would perhaps say the same thing: "Look at it, it's just reacting to stimuli through pattern recognition! It would never solve novel problems!"

Like a new algebraic problem. Or a new riddle. The problem itself has never been solved, but those "types" of problems aren't new.
I meant a new class of problem. x+y=? and a+b=? aren't different problems just because the variables are different.

Is there a human alive that I can go to and ask about anything in any subject matter and receive a comprehensive, professional answer from?
Given that the answers LLMs provide are encyclopedia-like, not ones that require specialized knowledge, yes, this is something a human could feasibly do. You could train a human to recognize a wide variety of topics without actually understanding them and to paste links to Wikipedia.

Is there a human who can generate a whole working program in a matter of seconds?
Yes, a working program is easy enough. A correct program is more difficult, but AIs can't do that reliably either.

Even if we grant that they can't evaluate the fact of the matter until they try, perhaps after a lot of trying, they keep going. At what point do they say, "You know what, with the facts and evidence we have gathered the past 10-20 years, I don't think this sacrificing stuff is working"?
You're talking about pre-scientific societies, remember?

Wildfires are burning in LA and there are thousands of people right now in comment sections saying it's the literal wrath of God for being a liberal state with LGBTQ people. Did they consider the truth value of their beliefs anywhere near accurately?
Again, you're conflating evaluating facts and evaluating truth. Are you saying there's no internal syllogism that links the propositions "LA is burning" and "it's burning because of God's wrath"?

If someone tells me the sky is blue and another tells me the sky is an illusion made by the government and NASA to keep me subservient.. I'm an idiot if I even give the second guy the benefit of, "well, I don't have access to ultimate truth so I guess I can't actually really know!"
So what you actually mean is that the AI can detect the tone of misinformation. What if I say "the government is putting fluoride in the water to curtail our bodily autonomy!"? Or what if I say "the original wild variant of COVID-19 was leaked from a virology lab near Wuhan, and the people who reject this hypothesis outright do so for political reasons"? Is that misinformation? I think you'll at least concede that it's not clear.

I'll spare you the journey, in the end, there's some circular defining of things, but I don't see why we can't say chatGPT "thinks".
Yeah, sure. But I think that word, as is, is applicable to computers, too. I will even grant that LLMs have beliefs, in the form of memes or catchphrases.

principal
"Principle".

If our brains, without us, can literally invent fucking colors, why can't we imagine a new one?
It could be that the datatype for color is complete. If we can fit 4294967296 numbers in an int, why can't we fit one more?
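
A quick C++ sketch of that analogy, assuming a 32-bit unsigned int: the type already names every value it can hold, so asking for "one more" just wraps back onto a value that already exists.

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t full = 0xFFFFFFFFu;   // the last of the 4294967296 representable values
    std::uint32_t one_more = full + 1;  // unsigned overflow is well-defined: it wraps around
    std::cout << one_more << '\n';      // prints 0: no genuinely new value appears
}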

If our imagination can't do that, it's limited - because clearly it's possible since it has literally already happened.
As per my example, that it has happened does not imply that it can happen again.
Now, even in such a case, could the brain rewire itself to conceive of a color that has no possible associated stimulus that evokes it? If you plugged new eyes into it that can perceive a new wavelength of light and signal it to the brain, would the brain evoke a brand new color or an old one?

And what's the point of an imagination if it can only imagine things that already exist? It doesn't conjure new things entirely, it makes new from old as far as I can tell, and that's the biggest limitation.
Like I said, if you want imagination in this sense then pipe /dev/random into a bitmap. Something truly, wholly new is indescribable, alien, and unrelatable.
You ask what the point of imagination as we have it is, but what is the point of imagining something with no bearing in our lives? What good would it be if you were trying to figure out the shape of the tool you need, and your brain gave you white noise? What I said about the power of reasoning applies to imagination as well. If your imagination was totally unconstrained it would have to be divorced from your immediate experience and of no use whatsoever.
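
For what it's worth, the "/dev/random into a bitmap" experiment is easy to run. Here's a minimal C++ sketch (the file name and image size are arbitrary choices) that fills an image with random bytes and writes it out as a binary PGM; the result is a picture nobody has ever seen before, and it means nothing.

#include <fstream>
#include <random>
#include <vector>

int main()
{
    const int width = 256, height = 256;

    // A pseudo-random generator standing in for /dev/random.
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte(0, 255);

    std::vector<unsigned char> pixels(width * height);
    for (auto& p : pixels)
        p = static_cast<unsigned char>(byte(rng));   // pure noise: "wholly new" but meaningless

    std::ofstream out("noise.pgm", std::ios::binary);
    out << "P5\n" << width << ' ' << height << "\n255\n";   // binary PGM header
    out.write(reinterpret_cast<const char*>(pixels.data()),
              static_cast<std::streamsize>(pixels.size()));
}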

Imagine the sensory input of a new sensory organ. You just can't do it. Why not?! Our brain HAS done this.
Don't tell me what I can or can't do! I can totally imagine having eyes all around my head like spiders do, and it makes me jealous. If it's something more weird like sensing radio waves or magnetism I can easily imagine experiencing it similarly to hearing or touch, like how nociception is a distinct sense from touch but it's subjectively felt on the place where the stimulus is. I can imagine sensing radio waves like a pressure on one side of my body, while being aware that I'm not feeling a mechanical pressure.

Sure, but a scientifically advanced alien of any kind would easily be able to tell the difference.
Of ANY kind? That's quite a statement that I'm sorry to say betrays your own lack of imagination, or how little you've thought about the question.

If I give 7-zip any sort of data, the output is always gonna be basically the same - just that data compressed. This would clearly be processing, as all data is processed the same way.

If I give a reasoning box any sort of data, the output will be an analysis of that data, with viewpoints formed and questions asked.
This is the same argument you made before: 7-zip's function is fixed, while a reasoning agent's isn't. But a reasoning agent's function is fixed, too. You don't have the option to take your own brain out of your head and replace it with a squid's brain.

You say that the output of a reasoning agent will contain viewpoints and questions, but how do you know that a string of bits is a viewpoint of the black box you're analyzing? There's no intrinsic property in the data that lets you make that judgement. You can't design an algorithm to test whether the string of bits a computer (whether living or synthetic) produces is a statement about itself. If I say "I'm an ass man" you can know that it's my opinion because you know I'm a person. If a program says the same thing, how do you tell whether it's its opinion or something the program is programmed to say? I could easily code something that produces that string and you'd say "aha! But those are your words, not the program's!" Well, that's exactly what I'm saying. LLMs are just regurgitating things they've seen through a clever statistical trick, not reasoning. That the program is much, much, much more complex than a hello world doesn't change that.
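
In fact, the trivial program in question is all of a few lines, and nobody would say these lines hold an opinion:

#include <iostream>

int main()
{
    std::cout << "I'm an ass man\n";   // a fixed string with no belief behind it
}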

(Continued. Are you happy now?!)
To that end, if you were to put a good AI and a human in a black box and run this experiment, would you be able to tell if one wasn't reasoning (from analyzing the reasoning aspect of the output alone)?
I don't understand the point of your question. I've already said I can't judge whether reasoning is present without understanding, at least in broad strokes, the mechanism that produces the data. If you made me chat with an AI so good that I couldn't distinguish it from a real person even after testing it to the best of my abilities, then I would think I'd be talking to a person. If you then went "psych! It was a computer all along!" then, given the new information, I'd probably retract my previous conclusion, and it remains to be seen whether such a sophisticated AI reasons or not.

It seems to me that AIs are limited, because they are expected to give correct answers right away.
Mmh... I don't necessarily agree. The problem with AIs is not that they don't give correct answers. An AI that gave only correct answers in a limited set of problems and said "I don't know" for everything else would be very useful. The problem with AIs is that they're highly unreliable, much more than humans, let alone non-AI computer technology. That makes them very impractical to build on top of. You can't build on ground that could shift unpredictably.

Allowing them to run indefinitely as we do, constantly allowing them to generate new ideas from old ones (which, is that not what we do fundamentally?), test them, discard bad ones, and repeat. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?
Not with the models that exist currently. Their context windows are laughably small. They also can't "think" undirected. If you leave an LLM running indefinitely, without any human input, the entropy from the RNG takes over and the output degrades into gibberish. The reason all these models have a chat interface is because the human is needed to rein the conversation in.

The idea that an NN cannot be reasoning is a little odd to me. We can look at evolution and probably agree we would never have looked at early brains and thought them capable of reasoning. The beginnings of our brains were simplistic.
Your argument does not defend the idea you're trying to defend. If you agree that the earliest brains were too simple to reason, then you have to concede that NNs may still be at the stage where they're not capable of reasoning, even if later on they may be capable of it.

Could current NN technology at a later stage advance to the point of emulating reasoning? I don't know. Maybe. If NNs are indeed a good model of how the human brain works then given a sufficiently large neural graph, with loops and non-trivial structure, I don't see why human-like intelligence couldn't be achieved. The "sufficiently large" is doing a lot of lifting, though. Is such a gargantuan NN technically feasible? Who knows.
Does current NN technology reason? No, definitely not. It just doesn't, based on what we know about how it works.

If we saw the inner workings of a cockroach brain, you would perhaps say the same thing: "Look at it, it's just reacting to stimuli through pattern recognition! It would never solve novel problems!"
It is indeed true. A cockroach will never solve novel problems; I can say that confidently. Its descendants might, but it will not. It will die being a dumb cockroach.

(Part 2/2)
Given that the answers LLMs provide are encyclopedia-like, not ones that require specialized knowledge

I don't really agree here. Specialized knowledge is well within the capabilities of these LLMs.

If you ask them about things, then they respond with their encyclopedia-like knowledge, but if you ask them to problem solve, they can put that data to use in solving that problem.

A correct program is more difficult, but AIs can't do that reliably either.

Again, depends on the model. Correct and working programs are regularly given to me when I ask, even for complex tasks.

They do seem to struggle with visualization problems, where some variables are more intuitive to understand by looking at the issue than by just hearing about it.

However, that's not particularly related here. Again, we could take hours working on code; the AI must produce it within seconds.

The new o1 models try to combat this by allowing the AI to take multiple reasoning steps and have the extra time to solve each individually.

You're talking about pre-scientific societies, remember?

But that's the entire point I'm making. We invented science and the scientific process because we are so bad at evaluating truths and facts. Yet we are completely capable of reasoning - it's just usually been bad/wrong reasoning for most of humanity.

Are you saying there's no internal syllogism that links the propositions "LA is burning" and "it's burning because of God's wrath"?

If we follow any syllogism that appears and that we like, are we *good* at evaluating facts/truth values?

If we say taking up any logical connection that exists makes you able to evaluate truth values, then an AI is definitely doing that. Logical connections only exist given the necessary data or assumptions.

To connect "LA is burning" with "God's wrath", you assume God, you assume God's intent, you assume God's involvement, and hell, you assume climate change isn't real.

If we go that route, anything can be logically connected to anything, as we can always generate the necessary assumptions needed to make it so.

AI can detect the tone of misinformation

No, I wouldn't agree to this at all. There's been plenty of confirmed information that sounds like misinformation, but the AI would never reject it. For example, "Did you know the government is spying on all of our communications? And when a guy called them out for it they made him their most wanted target?!"

As Carl Sagan said, "extraordinary claims require extraordinary evidence." The AI will not reject what I said (let me test.. it immediately knew what event I was talking about and acknowledged it - Edward Snowden and PRISM).

However, if you make such claims about things where the facts are not actually known, it should reject what you're saying, because the evidence is not clear.

"Principle".
Prince apple.

It could be that the datatype for color is complete. If we can fit 4294967296 numbers in an int, why can't we fit one more?

Lmao, this seems hardly worth responding to. Why would we run into such a problem? We already know other animal/bug/insect species can see further into the spectrum than we can.

If you plugged new eyes into it that can perceive a new wavelength of light and signal it to the brain, would the brain evoke a brand new color or an old one?

While we can't know, I would personally assume it would evoke a brand new color. I don't see any reason the brain would do something as stupid as confusing you by making two wavelengths look the same when it's clearly capable of creating another color.

but what is the point of imagining something with no bearing in our lives?

Sure, I don't disagree. But this is an explanation for why it's limited, which was the only point I made.

Don't tell me what I can or can't do!

Lmao! Can't tell if you're being sarcastic or not with the examples, but the experiences you laid out are all just retoolings of prior experiences. You can't imagine a new experience that's actually foreign.

But indeed, I don't see how you could ever argue that a foreign experience is not within the brain's capabilities. Other animals have senses we don't have, and they surely have experiences of those senses that we can't imagine.

Of ANY kind?

If it's scientifically advanced, then its science should/would take precedence over its natural predispositions.

I mean, they made it all the way to Earth and their alien brain is like, "we are so advanced, but the dominant creature of the planet, which exists on every continent, uses electricity, has sent probes and satellites into space, and is capable of nuclear weaponry and harvesting energy, is so hard to distinguish from those brown things running around and eating dirt."

Like.. come on. I can't believe it.

If I say "I'm an ass man" you can know that it's my opinion because you know I'm a person. If a program says the same thing, how do you tell whether it's its opinion or something the program is programmed to say?

I know you're arguing that reasoning can only be determined by looking into the process rather than the output, but this is getting dangerously close to "I think, therefore I am", where we might not be able to tell if anyone is capable of reasoning - simply because our consciousness does not extend into other people's brains.

We don't have in-depth enough knowledge of the inner workings of the human brain to actually know how it reasons. So by your argument, we don't know if anyone other than ourselves is capable of reasoning.

But again, given a single input, we may not know if the black box is capable of reasoning, but given many selected prompts, we could determine that, I believe. You could argue: what if the human in the black box just always says, "I'm an ass man"? Then it would be indistinguishable from a non-reasoning program. But we have to assume that the black boxes are actively trying to convince us that they are capable of reasoning.

There's no intrinsic property in the data that lets you make that judgement

But why not?! Why can't we look at a dataset and look for properties that are intrinsic to reasoning? Why can't reasoning be real as long as it's demonstrated adequately?

I had this talk with the AI about its reasoning, and it said:

...we've historically defined reasoning based on how humans do it because that's all we had to observe. But perhaps that's too narrow a view. Just as we've expanded our understanding of intelligence to include different types (emotional, spatial, etc.), maybe we need to expand our understanding of reasoning to include different mechanisms.

Just as a bird's flight and an airplane's flight are both legitimately "flying" despite using different mechanisms.


(Continued.. I am happy 🥺)