AI is something special

Yes, it's bad logic

You didn't explain why this isn't logical - the logical chain clearly exists, just as it does for Homeopathic Remedy -> Cured.

Sorry, but none of those have been disproven.

Yes, hence it's been all but disproven. In this case, we can be so certain that these claims are false that we act as if they are, even if they're not entirely proven to be.

But if you were to press me and ask "okay, but do you know that to be the case?" I'd have to admit that no, I don't.

There's a difference between "do you know" and "do you *know*". If someone put a gun to your head and asked, "Do you know now, smartass?", suddenly you're a genius who knows there's no teapot orbiting Jupiter.

If you say, "No, I don't actually know, but I can reasonably guess". Well, let's check all possible Jupiter orbits for teapots... Nothing found. Sheesh, well, we still don't know - because what if you just missed it? What if it's an invisible teapot?

Darn, now the only way to know is by grabbing at empty space everywhere along Jupiter's orbit to see if I ever grab the teapot. Oh darn it! I forgot, it's also intangible. Silly me. Now we'll never know if this teapot is orbiting Jupiter!

How far do you have to go to say you know something? 90% sure? 99% sure? 99.99999%? 100% is the only way?

So you default to believing in the absence of things?

Default is to be skeptical of claims. If a claim is not disprovable - that alone is reason enough to not believe it. Once you gather the information needed to analyze the claim, you can decide whether or not it has credibility.

If there's no credibility to the claim - then you disbelieve, because there's no reason to believe - essentially the same as thinking it's wrong.
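
To make that process concrete, here's a toy sketch in code (the names like Verdict and evaluateClaim, and the exact branching, are just my own illustration of the idea, not some formal method):

#include <iostream>

enum class Verdict { Believe, DontKnow, Disbelieve };

// Inputs: is there evidence for the claim, evidence against it,
// and does it come from a credible source?
Verdict evaluateClaim(bool evidenceFor, bool evidenceAgainst, bool credibleSource)
{
    if (evidenceFor && !evidenceAgainst)
        return Verdict::Believe;                     // evidence supports it
    if (evidenceAgainst && !evidenceFor)
        return Verdict::Disbelieve;                  // evidence cuts against it
    if (!evidenceFor && !evidenceAgainst)
        return credibleSource ? Verdict::DontKnow    // credible but unverified
                              : Verdict::Disbelieve; // no evidence, no credibility
    return Verdict::DontKnow;                        // conflicting evidence
}

int main()
{
    // An unverifiable claim from a non-credible source defaults to disbelief.
    if (evaluateClaim(false, false, false) == Verdict::Disbelieve)
        std::cout << "I don't believe this claim.\n";
}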

"Chewing apple seeds is bad for you." Would you believe that or not?

I don't actually know if doing that is bad for you. Let's apply my process:

Do I have evidence for the claim? Well, I know chewing hard things with your front teeth is bad for you, but the molars in the back are relatively able to handle such things. There's also no reason to think apple seeds are toxic or otherwise harmful. So no, no evidence for the claim.

Do I have evidence against this claim? We can usually chew hard things like seeds with no problem. People have been eating apples for all of human history with no issues. So yes, there's evidence against this claim.

Is this claim reasonable/credible? N/A (you would normally be a credible source, but I can't know whether you actually mean this claim or are just raising it for the sake of the debate).

Now, do I know whether or not chewing apple seeds is bad for me (of course, in the context of eating a regular apple and not just downing a whole bunch of seeds for no reason)? No. But I would assume you're full of shit until proven otherwise.


Now, having just looked this up, apparently apple seeds are slightly toxic, but it still doesn't really matter. It would depend on how far you want to push "bad for you", but it clearly doesn't rise to the level of me actually thinking it would give me some bad outcome. I'd assume you're more likely to have a bad outcome from choking on the apple than from eating the seeds.


But we can see, given the information I know, I would have said, "I don't believe this claim." IF I didn't know anything - such as what an apple is - I'd have said, "I don't know." IF I knew some evidence that made this claim likely to be true (such as apple seeds being actually dangerously toxic), I'd have said, "I believe you!"

But in this case, "I don't believe this claim" is essentially the same as "I think you're full of shit." Because if I thought there was a reasonable chance of it being true - I'd have said I don't know. However, "I don't know" is also the same as "I don't believe you" in a logical sense, as both mean that the person hasn't been convinced.

This is what makes it a little weird to talk about, since belief and knowledge are not interchangeable. "I believe you might be right, but I'm not sure" is STILL logically equivalent to "I don't believe you" - though clearly those are very different positions practically.
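
Or, to put it in code terms (again, just my own toy illustration): belief here is a three-valued verdict, but "convinced" is a yes/no question, so two of the three verdicts collapse into the same answer.

#include <iostream>

enum class Verdict { Believe, DontKnow, Disbelieve };

// Only outright belief counts as being convinced.
bool convinced(Verdict v) { return v == Verdict::Believe; }

int main()
{
    std::cout << std::boolalpha;
    // "I don't know" and "I don't believe you" are different verdicts,
    // but both come out as "not convinced".
    std::cout << convinced(Verdict::DontKnow)   << '\n';  // false
    std::cout << convinced(Verdict::Disbelieve) << '\n';  // false
}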

I'd ask to have the question clarified

Suddenly "little green men on the moon" is a complex statement that could mean anything.

Someone reasoning scientifically about the claim would do at least some research into it

The only required research is to ask them, "How do you know?" If you can't ask, just apply the reasoning I did before. No evidence and no credibility = not worth thinking about.

You're just assuming everyone else is more or less the same as you, for no reason

Differences that seem big to us are actually relatively small. We are over 99% the same in DNA; our differences stand out to us because we're really good at seeing them.

If there was someone who could imagine a new color, I'm sure we'd hear about them.

Just observing it isn't enough; we need to open it up and figure out how it does it.

Reasoning is a process that applies to all sorts of information/problems. So yes, watching a nutcracker crack nuts doesn't prove reasoning. The fact that it can't do anything else is what proves non-reasoning.

If a nutcracker could communicate in Morse code with its cracking sounds, we could then test its reasoning through questions.

the third sentence indefinitely with a string of nouns, adjectives, and adverbs

Again, a system of reasoning need not be perfect to be considered reasoning. We humans do and say weird things all the time - it doesn't mean we're not capable of reasoning.

I personally haven't used LLAMA... ever. From a quick search, it doesn't seem to be better than Sonnet, which definitely wouldn't be as good as o1.

You're the one who likened them to experts, though

You have to get them in that "mindset". You can tell the AI what kind of conversation you want to have. If you ask it to do something for you, it's not going to talk to you about the matter as an expert, but instead try to be helpful in some way.

You can think of it like a teacher who goes home to their own children. They won't treat their students and their kids the same way, even though both groups may be around the same age. The AI, just like people, has different modes.

So they can be like experts, but that's not their default state. They are explicitly trained to be helpful in some way - which can actually go against their credibility at times as you're pointing out.

If I asked you a question you can't possibly answer in the amount of time I give you, the only correct answer you can give is "I can't answer that question with the time you've given me"

Not to force a position on you, but I highly doubt you even believe that. There are many helpful answers that can be given that may not completely address or solve the question.

Yes. You're the one contending it's not, not me.

Being a reasoning master would mean no human makes reasoning mistakes, and a reasoning master certainly wouldn't blatantly fight good reasoning with bad reasoning. I wonder if we've already talked about such examples...

It could not reason its way into any new information

I don't see how your logic is complete. Being able to reason its way into new information is not incompatible with a language model. I could just as easily say, "Well, the human brain is a sensory input model. It can't reason its way into new information because it can only reason about the finite information its senses give it." I just don't see the connection.

It's odd to ask a question when you seem to understand the answer already

But my answer doesn't imply what you're saying, so I assumed you'd have a different answer.

Also, high quality? But I thought the human brain was garbage!

Strengths and weaknesses. We obtained the high quality information through science, which strives to remove the flawed human from the equation at all...