CommentRe:"user friendliness" (Score 1, Troll)214
Suppose Linux runs an application that reads from and writes to a file on a case-insensitive third-party source system.
The application itself shouldn't need to know what kind of filesystem the third party uses (if you disagree, imagine thousands of applications, each carrying special-case code for every possible third party). So the responsibility for doing the right thing lies with the OS, and that means the kernel, since it handles the system calls.
If Linus is right, then the kernel should treat all filenames as streams of bytes. Good idea? Not really. Suppose the source system writes the filename in two different ways (maybe two apps are writing the same file, or one app has been upgraded, etc.). This is allowed under the source system's rules.
Now the Linux application reads the file, and the kernel acts as if these were two different files. You can see there will be unintended consequences and subtle semantic bugs.
So Linus is wrong. The kernel should know enough about third-party filesystem conventions to do the right thing. That includes knowing when two filenames are treated as equivalent. Otherwise the OS is abdicating its responsibilities towards the applications.
And yes, it's messy for the kernel. But better that it's done once in the kernel than done badly, again and again, in every application.
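To make the failure mode concrete, here is a minimal Python sketch. The filenames are made up, and Unicode casefolding stands in for whatever equivalence rules the real source system uses:

    import unicodedata

    def source_system_key(name: str) -> str:
        # Stand-in for the source system's rules: a case-insensitive
        # filesystem treats names as equal after normalizing and casefolding.
        return unicodedata.normalize("NFC", name).casefold()

    # Two apps on the source system write "the same" file two ways.
    name_a = "Report.TXT"
    name_b = "report.txt"

    # Kernel-as-byte-stream view: two distinct files.
    print(name_a.encode() == name_b.encode())                       # False

    # Source-system view: one and the same file.
    print(source_system_key(name_a) == source_system_key(name_b))  # True

If the kernel only compares bytes, the application sees two files where the source system guarantees there is one.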
Comment: Re:Yoda and... Rutte? (Score 1)
"But if he had an accent, or it's really hard to understand what he's saying, they focus on what he's saying."
Elle be beck!
Comment: Re: Start with fungus (Score 1)
Your suggestion of a hierarchy of rights based on a classification of the natural world is intellectually clean and consistent. That's great as far as it goes, but it cannot be mapped onto the reality of what humans have done in the past and present, and will likely do in the future.
Comment: Re:It's an interesting topic (Score 1)
"Like that OpenAI study last month where OpenAI convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self preservation instincts. Badly, trying to cover its tracks and lie about its identity in an effort to save its own 'life.'"
I think repeating that kind of narrative is dangerous, as it will confuse your own thinking. You're repeating unsupported claims of analogized behaviour that exist purely in the paper authors' minds.
In science, it's very important to distinguish fact (the AI software is generating outputs that cross imaginary barriers existing only in the experimenter's mind) from interpretation (the AI is deliberately trying to break out of a shared conceptual framework with the experimenter that should constrain its behaviour).
The difficulty with current systems is that they are so interactive that it's very hard to recognize how much of the "intelligence" and "agency" is actually supplied by the human interlocutor. It's a variation on the Clever Hans problem.
Comment: Re:Start with fungus (Score 1)
Comment: Re:Read John Searle: Brains make minds (Score 1)
The data input is human behavior. The data output looks like human behavior.
Hey! That's what happens when you stand in front of a mirror!
Time to give human rights to mirrors, who's with me?
Comment: Re:Process? (Score 1)
Comment: Re: Not what the narrative says (Score 1, Insightful)
Comment: Re:I bailed out on Google years ago (Score 1)
(Oh, and probably drive in front of a blue screen, it's less dangerous)
(an old PC in the backseat _may_ work too)
Comment: Re:Frustrating but,,, (Score 0)
Comment: Re:Fix the actual problem! (Score 1)
Good on you for mentioning the audio quality problem; let me mention the other elephant in the room: the picture quality of most modern TV shows and movies is abysmal. Way too dark.
A picture is worth a thousand words.
What I'd like is automatic brightness normalization. The more detail I can actually see in a scene, the fewer audio cues I need to piece together the action. Keep shadows to 20% of the picture area, automatically. I'm trying to watch a film, not a radio show.
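In that spirit, here's a toy Python/numpy sketch of what "automatically" could mean. The thresholds are assumptions I picked for illustration, not anyone's standard:

    import numpy as np

    DARK_LEVEL = 0.2       # luma below this counts as "shadow" (assumed)
    MAX_SHADOW_FRAC = 0.2  # target: at most 20% of the frame in shadow

    def normalize_brightness(frame: np.ndarray) -> np.ndarray:
        # frame: grayscale luma values in [0, 1].
        # Lower the gamma (which brightens shadows) until the shadow
        # fraction drops to the target, with a sanity floor on gamma.
        gamma, out = 1.0, frame
        while (out < DARK_LEVEL).mean() > MAX_SHADOW_FRAC and gamma > 0.3:
            gamma -= 0.05
            out = frame ** gamma
        return out

    # Toy "frame": skewed dark, like a typical streaming-era night scene.
    rng = np.random.default_rng(0)
    frame = rng.beta(2, 8, size=(480, 640))
    fixed = normalize_brightness(frame)
    print((frame < DARK_LEVEL).mean(), (fixed < DARK_LEVEL).mean())

A real implementation would work on the video's luma channel per scene, but the principle is the same: measure the shadow fraction, then lift it toward a target.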
Comment: Re:Oh what a chance! (Score 1)
"Generative AI translates the Cobol code into readable specifications and software developers take it from there (with or without the infamous Vibe coding)."
If you don't understand what's wrong with your proposal, then you shouldn't go near one of these systems. If you were making a sarcastic joke, then you deserve an upvote (sorry, I have no mod points).
Comment: Re:Meh (Score 2)
Just a small correction. The LLM doesn't generate the most plausible answer. It generates the most likely answer given the biases of its training set.
A "plausible" answer is an answer that is likely to be true. A training set such as all the comments on a fantasy forum is not a source of truth, just a source of conversations. Its bias is towards magic and dragons and medieval technology. Therefore, the most likely answer to some question like "why is the sky blue?" will reference magic. That is not a plausible answer, but a likely one.
(pedantry completed: you made it out alive!)
Comment: Re:If you had 200 interns (Score 2)
AI agents are dangerous in human society because 1) they are not rational and cannot follow instructions, only fill in the blanks statistically, and 2) they are immune to punishment for misconduct and carry no legal responsibility.
The biggest benefit of having a human "manage" a team of "AI agents" is that the human is legally responsible for the "team".
This is a tried and true solution to issue 2) above: historically, human beings have managed teams of slaves (a slave was not legally a person) and groups of animals (an animal has no human rights) in fields, on work sites, for transport, etc. In all cases, if the animal or slave caused damage to property or life, the team's owner or leader was on the hook for punishment and restitution.