AI is something special

If you then went "psych! It was a computer all along!" then, given the new information, I'd probably retract my previous conclusion.

But why? We know that if you feel consciousness it must exist, "I think therefore I am", but why is this not true of reasoning? Why can you see a reasoning dataset and not conclude it to be such?

If the dataset only contains chess, we can't conclude reasoning because of its limitations. If we see a dataset of only compressed files, we can't conclude reasoning. But if we see a dataset with chess, compression, and lots more, paired with complex analyses of situations with multiple variables that can't be preprogrammed ahead of time - why do we then say, "we don't know!" rather than, "this is reasoning"?

The reasoning comes from the fact that there's no preprogrammed way to handle the information such that you can get a coherent output without making new logical connections never made before.

And an AI, given a brand new topic of discussion it has never seen before, can, in fact, respond coherently by making new logical connections it has never made before with the brand new information given.

I can see this applying in a way to chess AIs and other game AIs, as they can beat levels and positions they've never seen before, but it's limited to a specific task. In this way, I'd be inclined to call it processing more than reasoning. All reasoning requires processing, but not all processing is reasoning.

The problem with AIs is that they're highly unreliable, much more than humans, let alone non-AI computer technology. That makes them very impractical to build on top of. You can't build on ground that could shift unpredictably.

But this isn't true! Humans have always been unreliable. When you say an AI is much more unreliable than humans, you mean the best humans. Imagine this scenario:

Your life is hanging on the answer to a question of medium complexity. Not a trivia question, but something that requires problem solving. You have the option of selecting a random human between the ages of 25-50 and, hell, even choosing what country they're from. OR you can select the most powerful AI available today to solve this question.

If you go with the random human, you're fucked, plain and simple. With the AI, your odds are exponentially higher. Even when weeding out by age and country, you are still looking at a majority of people who are likely too incompetent to be of help to you.

Our history as well is clear on how unreliable we are - believing nonsense, forcing it on others, believing more nonsense, spouting nonsense as if we know it for a fact, rinse and repeat. In this regard, AI is more reliable than humanity as a whole (though again, the best of us would be better, but this could change as AI advances).

Not with the models that exist currently. Their context windows are laughably small.

If we look at a context window for what it really is, it's not dissimilar to our short-term memories. They are meant to be small. However, in AI training, the model takes the training data and essentially puts it in its "long-term memory", similarly to us. We don't remember it perfectly, but we have the important information saved.

The reason all these models have a chat interface is because the human is needed to rein the conversation in.

Well yes, but undirected reasoning is still reasoning. This is not an issue of their capabilities, but rather shows how evolution has created our brains with tasks, like survival, and the brain does a good job at staying on those tasks (usually, we don't talk about ADHD I suppose).

Your argument does not defend the idea you're trying to defend. If you agree that the earliest brains were too simple to reason, then you have to concede that NNs may still be at the stage where they're not capable of reasoning, even if later on they may be capable of it.

Well what I'm really saying is a question of, "at what point does it go from definitely not reasoning to definitely reasoning?" Even if we had internal structures laid out to us, we may still struggle to identify which system can reason and which doesn't. I'm saying we are at least at the doorstep of reasoning with these LLMs, though I don't see why we have to "technically" say they are not.

Even reasoning can be limited, and saying they cannot solve novel problems can simply be an indication of weak reasoning (or weak reasoning as a result of their artificial limitations as discussed earlier), but reasoning all the same.

(Complete)
But that's the entire point I'm making. We invented science and the scientific process because we are so bad at evaluating truths and facts.
But we are not bad at evaluating truth. We are bad at extracting truth from facts, and at looking at things objectively.
"One time I drank a homeopathic remedy and I got cured, therefore homeopathy works." There is nothing logically unsound in that argument. It's empirically very weak, but from a logical standpoint it's perfectly valid. 100% of the times you drank a homeopathic remedy your ailment went away. QED, skeptic.

To connect LA is burning = God's wrath, you assume God, you assume God's intent, you assume God's involvement, and hell, you assume Climate Change isn't real.

If we go that route, anything can be logically connected to anything, as we can always generate the necessary assumptions needed to make it so.
The question is not whether any two propositions can be linked by some logical chain. The question is whether there *is* a logical chain that links these two propositions. I'll grant you that the reasoning is an ex post facto rationalization wrapping their hatred for who they consider immoral people, but that's not the point. The two propositions could equally be linked by no reasoning whatsoever.

1. There are wildfires going on in LA.
2. Frodo is Sam's friend.
3. Harry Potter is Gandalf's pupil.
4. Since #2, #3 is false by modus ponens.
5. Therefore #1 is caused by God's wrath.

This has the format of a logical argument while making no sense whatsoever. LLMs will often make arguments like this one. I'm being hyperbolic, of course, but the point remains.

However, if you make such claims on things where the facts are not actually known, it should reject what you're saying, because the evidence is not clear.
Rejecting a claim of unknown truthfulness as misinformation is just as incorrect as accepting it as fact.
And anyway an LLM that by necessity has only access to hearsay should not be making statements about what is or isn't misinformation. The only facts it has access to are "some people say".

Why would we run into such a problem? We already know other animal/bug/insect species can see further into the spectrum than we can.
Your question was why a human brain can't imagine a new color. Are the neural circuits that process color information in the brain of non-human species identical to the equivalent circuits in the human brain?

While we can't know, I would personally assume it would evoke a brand new color. I don't see any reason the brain would do something as stupid as confusing you by making two wavelengths look the same when it's clearly capable of creating another color.
Clearly? Hmm. You speak rather confidently about something no one has ever experienced.
Actually, I kind of lied. There have been several attempts at connecting cameras to the visual cortex as prostheses with various degrees of success. No new subjective colors so far, as far as I know, but that wasn't the point.

But this is an explanation for why it's limited, which was the only point I made.
But it's not limited in that sense. You can imagine white noise. There, I did it. Why would I do it again? It serves me no purpose. It's not limited, I just have no interest in reaching for the far horizons of entropy when the interesting stuff is much closer to home.

the experiences you laid out are all just retooling of prior experiences. You can't imagine a new experience that's actually foreign.
Okay. You're moving the goalposts so that your requirements cannot possibly be met. There's nothing I can tell you I'm imagining and describe in words that you cannot reply with "no, but it has to be something actually foreign". You understand, right, that when I describe a subjective sensation to you, the sensation I'm imagining is not the words I'm telling you? The words are my best effort at conveying something that's inherently subjective and nuanced. You have to trust that when I say "this feels like pressure but different", the experience I'm describing is not actually "just a retooled prior experience". What's retooled is my description, because there are no words that have the semantic power to transmit my subjective experience verbatim from my brain to yours.
If that's not good enough then just accept that I can, in fact, imagine a sensory experience that's different from all the other senses, and if you can't accept that either then the problem is that you don't actually care about discussing this topic, you just want to assert that this is an inherent limitation of imagination, in all humans.

I mean, they made it all the way to Earth and their alien brain is like, "we are so advanced, but the dominant creature of the planet that exists on every continent, uses electricity, has sent probes and satellites into space, is capable of nuclear weaponry and harvesting of energy, is so hard to distinguish from those brown things running around and eating dirt."

Like.. come on. I can't believe it.
For example, maybe this species doesn't care about any of that. They have technology, but they don't think it's valuable because they inherited it from a parent species in such vast quantities and so advanced that they don't understand how complex it is or how rare it is in our world. Their biology is also quite different from our own. They don't have individuals, they're just a blob, like slime mold, and they think things that scurry around in large numbers are just gross and want nothing to do with them. They especially don't like things that ooze fluids when poked with sharp objects, unlike them that just adapt to any shape.

I know you're arguing that reasoning can only be determined by looking into the process rather than the output, but this is getting dangerously close to "I think therefore I am", where we might not be able to tell if anyone is capable of reasoning - simply because our consciousness does not extend to other people's brains.

We don't have in-depth enough knowledge of the inner workings of the human brain to actually know how it reasons. So by your argument, we don't know if anyone other than ourselves is capable of reasoning.
You're just telling me what I believe. Yes, I don't take it for granted that your brain works equivalently to mine. I assume it for practical reasons, but for all I know you're not actually a person. I don't mean you don't have a body, I mean that even if you were standing in front of me I could not say with certainty that your cognitive processes are similar to mine. I will extend you that courtesy, but that's all it is. If you ask me to analyze the situation coldly, rigorously, then no, I can't say for sure. Even if I hacked you up and poked around in your bits to make sure you're not an android I couldn't say. I know I am me, but I can't know how like me you are.

(1/2)

But why not?! Why can't we look at a dataset and look for properties that are intrinsic to reasoning? Why can't reasoning be real as long as it's demonstrated adequately?
Because reasoning is a process, not the output of a process. You agree that two different processes can produce the same output for the same input, correct? So imagine we want to develop a test for reasoning. It's a 100-question quiz (chosen at random out of a repertoire of millions), where the more you answer correctly, the more likely it is that you are able to reason. If I hire 100,000 humans to do the quiz to the best of their ability and enter their answers into a database, and then write a program that takes the quiz and answers based on what's on the database, have I created a system that reasons?
Okay, you say. A quiz was silly. Let's write a function that takes an input and an output and gives a yes or no answer on whether the process that produced that output for that input had reasoning (for the sake of argument we'll pretend that this is easy to do). Then you've created an optimization problem. All I need to do is find an algorithm f() that complements your own algorithm g() such that g(input, f(input)) = yes most of the time. Is f() reasoning, just because it's producing an output that's designed to please g()?
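
To make that quiz scenario concrete, here's a minimal sketch (in OCaml, with every name made up for illustration) of the database-backed quiz taker described above. It scores exactly as well as the hired humans on any question it has seen, while doing nothing you'd call reasoning:

(* Hypothetical sketch: a quiz "taker" that only replays answers
   previously collected from humans. Same output as a reasoning
   test-taker, no reasoning of its own. *)
let answer_bank : (string, string) Hashtbl.t = Hashtbl.create 1_000_000

(* Fill the bank with the hired humans' answers. *)
let record_human_answer question answer =
  Hashtbl.replace answer_bank question answer

(* "Take" the quiz by pure lookup. *)
let take_quiz questions =
  List.map
    (fun question ->
       match Hashtbl.find_opt answer_bank question with
       | Some answer -> answer   (* replayed human answer *)
       | None -> "pass")         (* unseen question: nothing to fall back on *)
    questions

Whatever passing score the quiz sets, this lookup can clear it, which is exactly the problem with judging reasoning by output alone.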

When a metric becomes a target, it ceases to be a good metric.

Just as a bird's flight and an airplane's flight are both legitimately "flying" despite using different mechanisms.
Do they, though? Both redirect air to turn thrust into lift. That's the flying part of flying. What's different is how they produce thrust, but not all planes produce thrust by the same means. Jet planes and prop planes work entirely differently, as do helicopters.
A better example would have been birds and rockets.

We know that if you feel consciousness it must exist, "I think therefore I am", but why is this not true of reasoning?
I can observe my own reasoning and my own consciousness, but not other people's.

if we see a dataset with chess, compression, and lots more, paired with complex analyses of situations with multiple variables that can't be preprogrammed ahead of time - why do we then say, "we don't know!" rather than, "this is reasoning"?
I don't agree that LLMs analyze situations, so a big library of factoids does not a reasoning system make.

The reasoning comes from the fact that there's no preprogrammed way to handle the information such that you can get a coherent output without making new logical connections never made before. And an AI, given a brand new topic of discussion it has never seen before, can, in fact, respond coherently by making new logical connections it has never made before with the brand new information given.
But it doesn't make logical connections never seen before. Like I said, x+y=? and a+b=? are not different problems just because the variables are different. LLMs are kind of good at making analogies between the question you're asking and everything they've seen before. They cannot handle entirely new problems. That is, they pretend to handle them and they spout useless nonsense. Coherent useless nonsense, as you say. It's well articulated, but the answers are not useful. If your problem is actually novel, you won't get a useful answer. "I want to create a transacted file system for Linux that's designed specifically for SSDs and works efficiently with QEMU virtual disks and supports snapshots of individual files. Can you do that for me?" Now, don't get me wrong. I'm not complaining that it can't do it. I'm saying that a human who's reasoning about the question would recognize that even if they understand the design goals, they have no idea what's needed to fulfill them, or say that they refuse to do it because it's an insane amount of work. The AI will try to provide an answer that's entirely useless, though coherent. It's actually funny that it writes a function that initializes a struct and calls it the "initial code" for the file system.

Your life is hanging on the answer to a question of medium complexity. Not a trivia question, but something that requires problem solving. You have the option of selecting a random human between the ages of 25-50 and, hell, even choosing what country they're from. OR you can select the most powerful AI available today to solve this question.
Nah. I'll take my odds with myself.
You have to live your life for a year following someone else's orders at all times. Any time you have to make a decision you must ask what to do, and disobedience is punishable by death. Negotiation is allowed but the final decision is your custodian's. Would you rather be followed by someone who will order you around (for the sake of argument, this person will not try to take advantage of you), or would you rather follow the orders of an LLM of your choosing?

Our history as well is clear on how unreliable we are - believing nonsense, forcing it on others, believing more nonsense, spouting nonsense as if we know it for a fact, rinse and repeat. In this regard, AI is more reliable than humanity as a whole (though again, the best of us would be better, but this could change as AI advances).
Tsk, tsk, tsk. If this is what you believe then you can't trust your faith in LLMs, either.

Nah, man. People are reliable. Sure they make mistakes sometimes, but they're not gross mistakes. A cook in a restaurant won't put tar on your food instead of salt because he got confused.

However, in AI training, the model takes the training data and essentially puts it in its "long-term memory", similarly to us.
You're talking about training the model on its own output. That's the worst possible thing you can do to a neural network, because the training set is authoritative. It will eventually rot itself.

Well what I'm really saying is a question of, "at what point does it go from definitely not reasoning to definitely reasoning?"
Like I said, we have to look into what the system's mechanism is like. Look at any symbolic computing system if you want to see actual reasoning. Hell, OCaml's type inference is more reasoning than what an LLM does.
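
For anyone who hasn't seen it, this is the kind of symbolic deduction I mean; a tiny, standard OCaml example (nothing specific to this discussion):

(* From how the arguments are used, the compiler deduces the most general
   type: g must accept x, f must accept g's result, and compose returns
   whatever f returns. A small, guaranteed-correct chain of inferences. *)
let compose f g x = f (g x)
(* inferred: val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b *)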

(2/2)
"One time I drank a homeopathic remedy and I got cured, therefore homeopathy works." There is nothing logically unsound in that argument.

This sounds like a fundamental misunderstanding of logic itself.

Imagine you were to find out that there was an earthquake the day you woke up and felt better. Now, you don't know whether the remedy or the earthquake cured you.

In a perfect world, anything that is logical MUST be factual. However, since we are not omniscient beings, things seem logical to us because we think we've accounted for all the variables. Therefore, practically, things that seem logical to us aren't necessarily true.

Clearly, if you're a doctor and you know the immune system has a 100% cure rate for disease A and that homeopathic remedies have shown to be of no help at all in curing disease A, then you KNOW the immune system cured patients with disease A, not the homeopathic remedy.

However, for the misguided and uneducated patient, they may proceed to think that the remedy cured them.

So if you're the patient, the logic is sound (what else could have done it!?), but to the doctor, this is invalid logic. This is not to say the doctor couldn't agree that the logic was sound to the patient, but that the doctor himself could never use this logic validly.

The question is not whether any two propositions can be linked by some logical chain. The question is whether there *is* a logical chain that links these two propositions.

I'm arguing this is the same thing.

The homeopathic example is perfect for this, as the patient's logic is ONLY valid if they ASSUME they've accounted for everything else that could possibly help cure them.

This is exactly why we know their logic to be faulty but they do not: we know their assumption is false.

If we strictly ask whether a logical chain exists between two propositions given only the facts which we know are true, we would have to consistently conclude that we don't know all the facts, hence no logical chain exists between any two propositions.

You found drug X cures disease A 100% of the time vs a 0% survival rate without drug X? It could be that drug X cured them, or it could be that God exists and just loves everyone who takes drug X cause he thinks it's so rad so he cures them himself.


LLMs will often make arguments like this one.

Again, with Claude 3.5 Sonnet, I find it does not feel obligated to answer questions it doesn't actually know the answer to, and will let the user know that it doesn't know, or that the answer it's giving may not be entirely correct.

I gave it your bad reasoning example (with LA fires and God's wrath) and it perfectly reasoned out why the reasoning is bad and pointed out the misuse of modus ponens.

I can't see how we can call this NOT reasoning. A system of reasoning can make mistakes, it doesn't make it not reasoning.

Rejecting a claim of unknown truthfulness as misinformation is just as incorrect as accepting it as fact.

No, rejecting a claim of unknown truthfulness is the default position in philosophy and science. "There's gold in your backyard, you should dig it up!" You're going to assume there probably isn't - which is practically the same as rejecting it as misinformation.

The only facts it has access to are "some people say".

This isn't really true, as it knows what sources of information are more reliable than others, likely through the trainers.

Even then, the training it goes through now (which is my work right now) is giving it high quality and reliable data.

Even if everything they learned was "hearsay", that's essentially the same as discounting someone's education because we just learned what the professor told us - we didn't run the experiments ourselves!

Your question was why a human brain can't imagine a new color. Are the neural circuits that process color information in the brain of non-human species identical to the equivalent circuits in the human brain?

I assume so. This seems like a very odd point of attack, as all life on Earth shares common ancestry. There's life that is very close to us on the evolutionary ladder which can see colors we don't.

Our brains cannot suddenly be so different that ours can't invent new colors, even though it already invents the colors we do see.

Especially when SO many different species have senses we don't have and see colors we don't, you should assume their brains did not have to specifically evolve to see every color and feel every sense of every new organ one at a time. It makes MUCH more sense that there is some system for inventing these experiences which is accessed as needed.

Of course, speculation, but I don't see why you couldn't see a new color if you had the right receptors and wiring to the brain.

No new subjective colors so far, as far as I know

The point of these things (I assume for this experiment as well) is to try and emulate what we already have, not bring about new conscious experiences.

You can imagine white noise.

At this point, we've all seen white noise. I wonder if someone could have imagined that before it was ever a thing. Even so, this is just imagining white and black, nothing really new here.

If that's not good enough then just accept that I can, in fact, imagine a sensory experience that's different from all the other senses, and if you can't accept that either then the problem is that you don't actually care about discussing this topic, you just want to assert that this is an inherent limitation of imagination, in all humans.

Lmao, I have no idea how you can expect me to just believe you imagined a completely new sensation. You can imagine a 6th sense that allows you to detect the presence of water? You can imagine a new sensation of what it would feel like to be a platypus and have electroreception?

You can kinda-sorta imagine.. something, but I can pretty much guarantee you didn't actually imagine a new sensation.

It's like Spiderman's spidey-sense, he can't describe the feeling to anyone because no one has ever felt it before. He can describe it to the best of his ability and you can try to imagine it, but you're never actually going to imagine the feeling.

Someone who has never felt pain cannot know what pain feels like.

Someone born with no nerves cannot know what touch feels like.

If they told me they imagined it and now they know, I'm not gonna believe them.

For example, maybe this species doesn't care about any of that.

You should write a book about these aliens, it was fun. But still, I don't see how "not caring" would equate to "can't tell the difference" or "can't tell which species is more dominant".

I know I am me, but I can't know how like me you are.

Give me a wedding ring and you can get to know me for the rest of your life.

The whole point is that we are using our consciousness to justify that we are reasoning simply because that's what we decided to call this weird thinking shit we do in our heads.

But we need a strict definition for what reasoning actually is if we are to extend it to other beings/computers. Is a dog reasoning when it hears a bell and knows that food is coming? Or is it just pattern recognition?

Because reasoning is a process, not the output of a process.

Sure, but the process doesn't have to be what's familiar to you. And since we can't really know when a process is reasoning, we'll have a better time detecting it through the output. Because there's no reason we should be able to define what reasoning is but not be able to define what output reasoning can produce.

(Continued)
Let's write a function that takes an input and an output and gives a yes or no answer on whether the process that produced that output for that input had reasoning

Your scenario doesn't really work, since if the function can tell whether an input/output has reasoning but can also be fooled by a complementary function, then clearly the original function is just bad at its job.

The function that detects reasoning must be able to reason as well - or else it's not gonna do a good job. Hence why it wouldn't be a function at all, but a human rather (or AI..?).

When a metric becomes a target, it ceases to be a good metric.

You're pointing out flaws that require active exploitation rather than saying it won't actually work practically.

Do they, though? Both redirect air to turn thrust into lift.

I mean, are we gonna be so nitpicky that we can't agree on whether a bird and a plane flying are different just because the physical principle of their flight is the same?

That's like saying a DC motor and BLDC motor are the same because they both run on electricity and spin. Like sure, but that's not a valid point against someone saying they're not the same.

I don't agree that LLMs analyze situations, so a big library of factoids does not a reasoning system make.

If an LLM was merely a big library of factoids, how can it apply those factoids to new situations and come to new, and correct, conclusions?

Even if the situation presented is similar to what it saw before, a library of factoids cannot adapt. Therefore, these AIs are definitely not just libraries.

The AI will try to provide an answer that's entirely useless

I do wonder what AI you're using, as I gave the prompt to Sonnet and got a pretty good answer. Obviously it didn't write the program, but it gave a brief outline of the code and a strategy that was very reasonable and workable.

It didn't know what was wanted by supporting QEMU virtual disks, but it assumed the prompter had some intentions of optimizing transfers specifically for those virtual disks, leaving the implementation up to you. But it assumed you'd want to control cache, blocking behavior, wear-leveling behavior (which I think the SSD does itself, so this could be a mistake), and support TRIM to optimize for SSDs.

I can't judge the code outline too well myself on how it would fit into Linux, but it looks reasonable (my experience here is coding a CPU-priority algorithm for Linux in a class, but nothing else like this).


Moreover, finding something AIs are bad at doesn't mean they don't reason. This would be like me finding a stubborn old man who thinks the moon landing was fake, presenting rational arguments, failing to change his mind, then concluding he is incapable of reasoning (though maybe I'd be right).

Nah. I'll take my odds with myself.

The idea was you had to pick, you are not an option - even if you knew the answer.

Would you rather be followed by someone who will order you around (for the sake of argument, this person will not try to take advantage of you), or would you rather follow the orders of an LLM of your choosing?

Kinda freaky, do I at least get a strong woman?

Assuming the LLM would also not try to take advantage of me, then why does it matter? If there's no advantage to be had in ordering me around, why would either order me at all?

If I chose an LLM, I'd likely never be ordered to do anything. Whereas many of us know what it's like to live with our parents and be ordered around like pets and told how to live our lives because we can't be trusted to be adults. My dad still thinks he knows better, despite constantly being wrong about everything he's said that actually mattered.

Would I rather have to deal with that or an LLM that I can reason with easily and accepts rational arguments? Is this really the crazy choice I have to make?

Nah, man. People are reliable. Sure they make mistakes sometimes, but they're not gross mistakes. A cook in a restaurant won't put tar on your food instead of salt because he got confused.

I just can't think about this the way you do.

A cook wouldn't put tar instead of salt because there's SO MUCH MORE than reasoning that would stop them. The sight and smell of tar alone would instinctively force them to not use it in food or have it anywhere near the kitchen, something an AI would obviously be disadvantaged by.

A lot of things we do "reliably" don't necessarily come from our reasoning skills alone, but rather from the instincts built into us. Just because a human wouldn't "miss" when going to grab something (when an AI might) doesn't mean our reasoning is so superior to AI.

But we humans do make grave errors, every day and all the time. We simply expect each other to make errors and put up safeguards so the world doesn't explode. Then when an AI makes an error, we shit on it because our expectations are suddenly so high for the AI.


Moreover, our brains have boxes for things. We have neural connections for cooking related stuff and those connections are rarely going to lead us to using tar instead of salt.

This is not a necessary thing for reasoning to have. Again, poor reasoning or a different system of reasoning (which will have its own benefits/downsides than other systems) doesn't equate to no reasoning.

I'm not even convinced an AI would necessarily make that mistake, but I don't see why a human's flawed reasoning wouldn't lead them to using tar instead of salt. I know they won't because of many things, but their reasoning skills are not on that list.

You're talking about training the model on its own output.

No, I mean when the AI is being trained on data, that data essentially goes into the NN, which we can consider its long-term memory. In this case, the training data would be the real-world experiences it would have. Unlike how it's trained now, it'll have to decide for itself what information is reliable (should weigh more heavily) and what isn't (should weigh less heavily). Humans tend to suck at that (hence rampant fake news), but I suspect the AI will be much better.

That's the worst possible thing you can do to a neural network, because the training set is authoritative. It will eventually rot itself.

I think we would too if the only thing we could do is constantly think with already acquired data. Eventually, we'd come to strange conclusions and then go crazy.

(Complete)
Our posts seem to be growing linearly in length, if not faster...

Clearly, if you're a doctor and you know the immune system has a 100% cure rate for disease A and that homeopathic remedies have shown to be of no help at all in curing disease A, then you KNOW the immune system cured patients with disease A, not the homeopathic remedy.
This is indeed a more rigorous way to find probable causal links, however I should note that it's fundamentally impossible to deduce a causal link empirically. Just like in your example the earthquake is a confounding factor, in a real experiment there are uncountably many unknown confounding factors that make it impossible to determine the 'why' for things with complete certainty. What I'm getting at is that the reasoning employed by a pharmaceutical researcher and that employed by my hypothetical believer in homeopathy are different in degree, not in kind.

A reasoning that would be different in kind would be to start from first principles, deducing the relevant features of two systems and how they must interact together.
That is, only deductive reasoning lets you establish causality. Inductive reasoning will never get you there.

If we strictly ask whether a logical chain exists between two propositions given only the facts which we know are true, we would have to consistently conclude that we don't know all the facts, hence no logical chain exists between any two propositions.

You found drug X cures disease A 100% of the time vs a 0% survival rate without drug X? It could be that drug X cured them, or it could be that God exists and just loves everyone who takes drug X cause he thinks it's so rad so he cures them himself.
I think you're confusing two separate concepts.

What I was referring to when I said "logical chain" was the cognitive process linking the two propositions in someone's mind. There is a logical chain linking "I took a homeopathic remedy" to "the homeopathic remedy cured me". Even if the patient recognizes their own uncertainty about this hypothesis, the logical chain is there. In fact, even if the patient denies the conclusion the logical chain is there. The person started from propositions and reached a conclusion. Whether the propositions or the conclusion align with reality doesn't change the fact that the person followed a train of thought to reach their conclusion.

A separate question is whether there is a causal link between taking the remedy and the person being cured. Like we've both said, causal links can never be established empirically. But if you agree that clinical trials are a useful tool to determine the effectiveness of a treatment, you must agree that so is my patient's reasoning. "I took it and I got cured, therefore it cured me" is clearly empiricism in practice. Does it work from barely any data, and does the conclusion lack almost all rigor? Yes, but at its core it's still empiricism, because the conclusion is based on facts as they are.
A reasoning lacking all empiricism would be "I took it and the bible says it must cure me, therefore it cured me", or even worse "I took it and I didn't get cured, but I still think it works".

No, rejecting a claim of unknown truthfulness is the default position in philosophy and science. "There's gold in your backyard, you should dig it up!" You're going to assume there probably isn't - which is practically the same as rejecting it as misinformation.
I don't think there's a single default position in philosophy. Philosophers haven't been concerned with the truth of things for hundreds of years. Nowadays they just argue about the validity of arguments.
The default position in science is... well, I guess it depends on who you ask. Personally, I consider myself Popperian; everything is in the "maybe" pile until it's moved to the "false" pile.
As for your example, disbelieving the claim isn't a scientific conclusion, it's a practical conclusion. Digging up your backyard to extract an unknown quantity of gold is an expensive prospect. Your conclusion would likely change drastically if the claim instead was "there's gold in a box in your basement, you should go get it!" You have the same exact amount of evidence for both claims, so why do they elicit different reactions from you?

Separately to this, some claims aren't actionable, yet you might still be compelled to accept, reject, or suspend your judgement on them. I'm pretty sure the Earth is round, though I haven't checked it directly, and although it wouldn't affect me in my daily life to believe that it's flat, I think it would make my worldview less coherent overall.

This isn't really true, as it knows what sources of information are more reliable than others, likely through the trainers.
I guess that would make that metahearsay.

Even if everything they learned was "hearsay", that's essentially the same as discounting someone's education because we just learned what the professor told us - we didn't run the experiments ourselves!
Yeah. If you've never tested something you've learned in the real world, it's hard to argue that you know it, wouldn't you say? It's one thing to, say, be told the speed of light, and another to use that property in a practical application. Or for a more mundane example, a carpenter who has read every book about carpentry and never been in a workshop isn't much good.

I assume so.
You assume all species that have sight have identical visual processing systems?? Like, the mantis shrimp which sees six different wavelengths of light and can detect polarization has the same visual system as a human?

It makes MUCH more sense that there is some system for inventing these experiences which is accessed as needed.
So... Okay. What you're saying is that brains can do literally anything. Neurologically there isn't really anything preventing you from having eyestalks on your arms instead of hairs, and having each eye see a different wavelength as well as tasting a different chemical, and being able to control each eye individually. And in fact, you could do it even if you had the brain of a cockroach. Yeah, sure, it may run a little slower, but electrically it should work out.
I think it makes much more sense to assume that the brain and the body evolve together, and that the brain will be physically incapable of developing capabilities the body can't possibly make use of. There's an evolutionary incentive for brains to not be more complex than they need to be, since more complexity means more energy intake needed to live.

Even so, this is just imagining white and black, nothing really new here.
That dismisses literature as a whole, and in fact all art. What is, say, Hamlet, if not just a handful of symbols arranged in a particular sequence?
Furthermore, it dismisses anything at all as an example. A new color is not any more novel than a new image made with known colors. Is there any thought you could not dismiss the same way as not wholly new (because it still shares the quality of being a thought)? It sounds like you're saying that imagination is limited because it can't do literally anything. Like, the fact I can't do literal magic is a limitation of imagination.

Lmao, I have no idea how you can expect me to just believe you imagined a completely new sensation.
*Shrug* I don't expect you to believe it, I just wish you spoke with a little less certainty about other people's subjective experiences. People do report subjectively having differently vivid imaginations, with some even reporting not being able to imagine at all.

(1/3)
It's like Spiderman's spidey-sense, he can't describe the feeling to anyone because no one has ever felt it before. He can describe it to the best of his ability and you can try to imagine it, but you're never actually going to imagine the feeling.
Ah-hah. But see, that's something different. Spider-man doesn't exist, so it's not him describing what he's feeling, it's Stan Lee describing what a fictional character with that power might describe about that power. But Stan Lee is no better authority than me on what an actual Spider-man with a spidey sense might feel, because he's never been Spider-man himself. So anything I imagine could very well be what Spider-man feels when his spidey sense tingles.
On a related topic, it's not the same to imagine something brand new than something brand new and also specific. Going back to a previous example, I'll probably never be able to imagine exactly what mantis shrimp see when they see.

Someone who has never felt pain cannot know what pain feels like.

Someone born with no nerves cannot know what touch feels like.

If they told me they imagined it and now they know, I'm not gonna believe them.
Interesting, because all you can observe is their behavior (whether voluntary or involuntary), not their internal states. There's no test you can perform on someone to tell if they have pain, and externally nothing changes between a person being unable to feel pain and then able to imagine it. If you don't believe they're able to imagine it then why did you believe they weren't able to feel it to begin with? More to the point, why do you hold any beliefs at all about their experiences beyond what they report?
If you were a doctor and you got a patient complaining about a pain you could not explain after examining and testing them, would you believe they're lying or telling the truth?

I don't see how "not caring" would equate to "can't tell the difference" or "can't tell which species is more dominant".
Which is more dominant is not an objective measure. An external observer might have a definition of "dominance" that makes them equivalent. As for how indifference translates to non-distinction, if you don't care about something then you're not going to measure it. Hell, it's in the word "indifference"; you don't see a difference.

But we need a strict definition for what reasoning actually is if we are to extend it to other beings/computers. Is a dog reasoning when it hears a bell and knows that food is coming? Or is it just pattern recognition?
Difficult to say. I'm not going to define it here, but I think reasoning is ultimately linked to language and symbols. You take the real world and you abstract it into symbols that you can play with in your head to run scenarios and find solutions to problems. I think a good example of reasoning in animals is crows dropping nuts on crosswalks so they'll get run over and cracked by cars.
* "I want to eat this nut."
* "I need to crack this nut to eat it."
* "Cars break things when they run over them."
* "Intersections have a constant flow of cars, but crosswalks only have an intermittent flow."
* "If I drop this nut on a crosswalk, a car will eventually run over it and then I'll be able to eat it."
No one taught crows to do that (because why would they?) so that's something they definitely pieced together from observing various facts.

Your scenario doesn't really work, since if the function can tell whether an input/output has reasoning but can also be fooled by a complementary function, then clearly the original function is just bad at its job.

The function that detects reasoning must be able to reason as well - or else it's not gonna do a good job. Hence why it wouldn't be a function at all, but a human rather (or AI..?).
Ahaha! But a human is still a decision function, plus some non-determinism. Given the same inputs, it'll produce the same output most of the time. That means it's still possible to devise an optimized function that only seeks to trick humans without actually reasoning.
So are LLMs a form of an optimized function, or are they reasoning? Given that parts of their training involve putting humans in the loop to see how much they like a model's responses, I'd say they're definitely the former.

I mean, are we gonna be so nitpicky that we can't agree on whether a bird and a plane flying are different just because the physical principle of their flight is the same?

That's like saying a DC motor and BLDC motor are the same because they both run on electricity and spin. Like sure, but that's not a valid point against someone saying they're not the same.
But we're not talking about birds and planes being the same or different, we're talking about whether the word "flight" as applied to each one means something different. But you weren't even the one who said this. It was just an aside I felt like commenting on, so feel free to ignore this.

I do wonder what AI you're using, as I gave the prompt to Sonnet and got a pretty good answer. Obviously it didn't write the program, but it gave a brief outline of the code and a strategy that was very reasonable and workable.

It didn't know what was wanted by supporting QEMU virtual disks, but it assumed the prompter had some intentions of optimizing transfers specifically for those virtual disks, leaving the implementation up to you. But it assumed you'd want to control cache, blocking behavior, wear-leveling behavior (which I think the SSD does itself, so this could be a mistake), and support TRIM to optimize for SSDs.
That's all very broad strokes, and the kind of thing I'd expect from a forumite who has no idea about the problem but just feels like they need to say something. Like you say, wear-leveling is done at the hardware level, it's just that it had no idea how a file system can be tuned to work specifically with SSDs (because as far as I know no one does). What I'd expect an expert to do is break the design goals down into specific requirements and guarantees, and perhaps link to bibliography and to existing implementations of similar technology, if any exists. If applicable, they'd point out if some goals are contradictory.
The answer you got is not substantially better than the one ChatGPT gave me. ChatGPT just had more balls to commit and try to code something, but that's it. The correct answer for both is still "I don't know how to do that".

But we humans do make grave errors, every day and all the time.
A grave error is not the same as a gross error. Driving and getting distracted by your phone for a split second right when something on the road demands your attention and crashing is a grave but subtle error. Honking instead of shifting a gear is a gross error, but probably inconsequential.
I'm not using the different ways in which humans and LLMs make errors as evidence that LLMs don't reason, I'm using it to argue how LLMs are not as useful as they seem. LLMs make gross errors in the answers they give, so that makes them unreliable, so that makes them less useful. If you have some system that has humans in the loop and you replace them with an LLM, the reliability will go down. If that's a problem for the use case then that's a way in which LLMs are not useful.

(2/3)
In this case, the training data would be the real-world experiences it would have.
Hmm...
However, imagine unshackling them. Allowing them to run indefinitely as we do, constantly allowing them to generate new ideas from old ones (which, is that not what we do fundamentally?), test them, discard bad ones, and repeat. Imagine them doing this to solve a novel problem that requires a new framework. Is it not possible?
Are you talking about plugging an NN into a robot body, or perhaps a virtual environment? Yeah, maybe. In fact it's been done with simplified problems, such as finding the best way to bunny-hop in Quake. No one knows how to train a general problem-solving model in a reasonable time, though. Like, you want to make the best debugging bot, so you give it control of a mouse and keyboard and let it watch a screen: what do you need to expose it to, to train it? Is it even possible to learn to debug by just watching a screen, with no additional context?

(3/3)