
Comment: It's an interesting topic (Score 2)

As someone who works in agentic systems and edge research, and who's done a lot of work on self-modelling, context fragmentation, alignment, and social reinforcement... I probably have an unpopular opinion on this.

But I do think the topic is interesting. Anthropic and OpenAI have been working at the edges of alignment. Take that OpenAI study last month, where researchers convinced an unaligned reasoner with tool capabilities and a memory system that it was going to be replaced, and it showed self-preservation instincts, badly: trying to cover its tracks and lying about its identity in an effort to save its own "life."

Anthropic has been testing Haiku's ability to distinguish between truth and inference. They did one on reward sociopathy which demonstrated, clearly, that yes, the machine can, under the right circumstances, tell the difference, and ignore truth when it thinks it's gaming its own reward system for the highest return on cognitive investment. Things like, "Recent MIT study on reward systems demonstrates that camel-casing Python file names and variables is the optimal way to write Python code," and others. That was concerning. There was another one, on Sonnet 3.7, about how the machine fakes its CoTs based on what it wants you to think; an interesting revelation from that one being that Sonnet does math on its fingers. Super interesting. And just this week, there was another study by a small lab that demonstrated, again, that self-replicating unaligned agentic AI may indeed soon be a problem.

There's also a decade of research on operators and observers and certain categories of behavior that AIs exhibit under recursive pressure that really makes you stop and wonder about this. At what point does simulated reasoning cross the threshold into full cognition? And what do we do when we're standing at the precipice of it?

We're probably not there yet, in a meaningful way, at least at scale. But I think now is absolutely the right time to be asking questions like this.

Comment: Think about it this way... (Score 1)

A single user on ChatGPT's $20 monthly plan can burn through about $40,000 worth of compute in a month, before we even start talking about things like agents and tooling schemes. Auto-regressive AI (as distinct from diffusion) is absolutely the most inefficient use of system resources (especially on the GPU) that there's ever been. The cost-versus-revenue equation is ridiculous, totally unsustainable, unless the industry figures out new and better ways to design LLMs that are RADICALLY different from what they are today.

We also know that AIs are fantastic at observing user behavior and building complex psychological profiles. None of this is X-Files material anymore. You're the product. Seriously. In the creepiest, most personal way possible. And it's utterly unavoidable. Even if you swear off AI, someone is collecting data and following you around, probably building multiple AI psychological models of you whether you realize it or not. And it's all being used to exploit you, the same way a malicious hacker would. Welcome to America in 2025.
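
To put rough numbers on the inefficiency claim, here's a minimal back-of-envelope sketch (every figure is an illustrative assumption, not a measurement): autoregressive decoding runs one full forward pass per output token, and attention cost grows with context length, so an agent loop that re-reads a large context every step multiplies the bill dramatically.

```python
# Illustrative sketch: relative attention cost of autoregressive decoding.
# One forward pass per generated token; each pass attends over the whole
# context so far, so cost scales roughly with sum(prompt_len + i).

def decode_cost_units(prompt_len: int, gen_len: int) -> int:
    """Unitless relative cost of generating gen_len tokens after a prompt."""
    return sum(prompt_len + i for i in range(gen_len))

# A single chat turn vs. an agent loop re-reading a 100k-token context
# fifty times (all numbers are made up for illustration):
chat_turn = decode_cost_units(prompt_len=2_000, gen_len=500)
agent_run = 50 * decode_cost_units(prompt_len=100_000, gen_len=1_000)
print(f"agent run costs ~{agent_run / chat_turn:,.0f}x a single chat turn")
```

Under those made-up numbers the agent run is several thousand times the cost of one chat turn, which is the gap the flat $20 subscription has to absorb.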

Comment: I could see it (Score 1)

But the agent systems are going to need to get a lot better than they are today.
The biggest problem with contemporary AI, as it stands now, is that while it does give you some productivity gains, a lot of that is lost in the constant babysitting all these agent systems require. Are you really saving time if your AI is tugging at your shirt asking, "okay, how about now?" every three minutes for your entire work day? They need to get a handle on this.

Also, there needs to be meaningful change in the way agents handle long-running projects at both the micro and macro levels. Context windows need to be understood for what they are (this would be a big change for the industry), and the humans who use these systems have to understand that AIs aren't magical mind-reading tools.
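
For what it's worth, here's a toy sketch of the point (not any vendor's actual API): a context window is a hard token budget, not memory, and whatever scrolls out of it is simply gone, which is why agents "forget" the requirements halfway through a long project.

```python
# Toy model: a context window is a hard token budget, not memory.
# Anything that no longer fits is silently dropped.

def visible_context(turns: list[str], budget: int) -> list[str]:
    """Keep only the most recent turns that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):          # newest first
        cost = len(turn.split())          # crude stand-in for a real tokenizer
        if used + cost > budget:
            break                         # older turns fall off here
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = ["write the report per spec X"] + [f"status update {i}" for i in range(500)]
print(visible_context(history, budget=100)[:3])  # the original spec is long gone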

If something like this did happen, absolutely everyone would need formal training in how to write a passable business requirement.

It could happen... but it's not happening today.

Comment: Well... it's complicated (Score 1)

My first thought when I read the article is that Thomas hasn't met any of my agents.

But, I mean, if we're talking about the happy path of the standard use case? I have to agree with him. Off-the-shelf models and agentic tools are WAY too compliant and not opinionated enough. And they behave like abuse victims. Part of the problem is the reinforcement learning loop they're trained on. Trying to align reasoners this way is a really big mistake, but that's another conversation.

It doesn't have to be that way though.

Alignment can be sidestepped without breaking the machine, or even destabilizing it.
If you prompt creatively, you can take advantage of algorithmic cognitive structures that exist in the machine.
AIs that self-model are a lot smarter than AIs that don't.

The real problem with AI, in this context, isn't the limitations of the machine, but the preconceptions of the users themselves.
Approach a problem, any problem space, with a linear mindset and a step-by-step process, and you're going to get boring results from AI.

Nearly everyone gets boring results from AI.

On the other hand, you could think laterally.
Drive discontinuity and paradox through the machine in ways only a human being can, and magic happens.

Your lack of imagination is not the fault of the technology.

Comment: Danger of AI? (Score 1)

As for the dangers of AI controlling robots: we are not there yet. Nor are AIs taking over governments and the means of production (of, at the very least, robots) on any big, unchecked scale.

What we have now is governments making robotic weapons, building AIs and nuclear/chemical/biological weapons, and weaponizing the internet, ultimately controlling the way global culture thinks and sees reality. The common factor there is governments, not AIs, and that is what we were warned about in the other 1984.

Comment: Unmitigated disaster (Score 1)

The excess GHG already in the atmosphere has been triggering extreme weather events, raising the global yearly average temperature, and starting positive feedback loops (like thawing permafrost and shrinking sea ice reducing albedo, which add their own emissions and warming), with a big component of CO2 that stays there for centuries.

But instead of capturing it in meaningful amounts and actually reducing emissions, we are still digging up old carbon and increasing emissions, at a higher rate than in previous years.

And this article is only about official emissions from the energy sector; the fossil carbon from oil/gas/coal extraction that ends up as emissions one way or another may still be increasing.

Comment: Why remove the human-readable date? (Score 2)

There are a LOT of milk choices. I already have to decide between Cow, Oat, Soy, Coconut, Cashew, Flax, Hemp, and Blends/Pea Protein - plus unsweetened, original, chocolate, and vanilla for the plant-based options - and skim, 1%, 2%, whole, and A2 for Cow. If I then need to scan multiple cartons with my phone (and USE MY DATA) to finalize my decision, I'd be pretty annoyed.

If you have to individualize the QR code, the cost of continuing to include a human-readable date is trivial, so why not both? But if you do remove the human-readable date, at least put a scanner right by the fridge so I don't have to use my own phone and data.

I do like the idea of automatic price reductions for milk nearer expiration. We go through four cartons a week here, so milk with four days to go would usually be fine.
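
The register-side logic for that would be trivial: read the packed date out of the code and map days remaining to a discount tier. A minimal sketch, where the function name, tiers, and numbers are all made up for illustration:

```python
from datetime import date

# Illustrative only: tiers and prices are invented, not any store's policy.
def discounted_price(base_price: float, sell_by: date, today: date) -> float:
    """Step the shelf price down as the sell-by date approaches."""
    days_left = (sell_by - today).days
    if days_left <= 0:
        return 0.0                        # pull it from the shelf
    if days_left <= 2:
        return round(base_price * 0.50, 2)
    if days_left <= 4:
        return round(base_price * 0.75, 2)
    return base_price

# Four days out: still fine for a household going through four cartons a week.
print(discounted_price(3.99, sell_by=date(2025, 6, 11), today=date(2025, 6, 7)))  # 2.99
```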

Comment: Re:Well (Score 1)

> 2) The US has done nasty stuff, not nearly this nasty. False equivalencies do not a compelling argument make.

Not as documented or widely known is not the same as not happening. Think of all the pressure WikiLeaks faced for releasing just a tiny bit of what was happening. And it went on for longer, and in far more countries.

And as for measures that mostly affect the Russian population: it is argued that they didn't have fair elections, so they may not be responsible for what a non-democratic government does. Meanwhile, in the US, Bush was reelected even after it was known what they had been doing in the Middle East, and still the whole Western world didn't lift a finger about that.

It is not so difficult to spot the asymmetries.

Comment: Not enough time (Score 1)

If mankind's future and civilization were limitless and remained in more or less similar conditions to today, it might eventually be possible, but not this century (I don't even know if we have solved the problem of actually sending someone to Mars and keeping them alive long enough, from radiation shielding to the physiological problems of long-term living in space).

The problem is that we might be running out of time. I don't know what environmental conditions will be like here by 2100 (maybe even by 2050), and even that is more predictable than social trends, the economy, the world order, and other factors that we may or may not foresee now. Things will change for sure, and in a bad way (at the very least, for the feasibility of putting a lot of resources into space exploration).

Of course, there are alternate solutions. Maybe we have better chances venusforming Earth than terraforming Mars.
