Friday Squid Blogging: A New Explanation of Squid Camouflage

New research:

Deravi, an associate professor of chemistry and chemical biology at Northeastern University, recently published a paper in the Journal of Materials Chemistry C that sheds new light on how squid use organs that essentially function as organic solar cells to help power their camouflage abilities.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 21, 2025 at 4:30 PM

Comments

J C. March 21, 2025 6:58 PM

Brute force decryption against ransomware Akira:

https://tinyhack.com/2025/03/13/decrypting-encrypted-files-from-akira-ransomware-linux-esxi-variant-2024-using-a-bunch-of-gpus/

The important part is that the ransomware uses keys generated from “predictable” seeds (process times). I found it interesting that they used Yarrow to generate the random keys (though I imagine without the proper reseeding).

As one of the designers of Yarrow, I’m curious about your thoughts. Did the ransomware writers use it as intended or do you see any obvious error (apart from using too little entropy)?
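The economics of the attack are easy to see in miniature. Below is a hedged Python sketch (every name is hypothetical; the real Akira Linux variant reportedly derives keys from nanosecond process timestamps and uses ChaCha8/KCipher2, per the write-up): if the key is a deterministic function of a timestamp known to within a narrow window, the “keyspace” collapses to the number of candidate timestamps in that window.

```python
import hashlib

def key_from_seed(seed_ns: int) -> bytes:
    # Hypothetical stand-in for the ransomware's key derivation:
    # a key derived deterministically from a process timestamp.
    return hashlib.sha256(seed_ns.to_bytes(8, "little")).digest()

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher (XOR with repeating key) -- purely illustrative.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, known_prefix: bytes, t0_ns: int, window_ns: int):
    # If the seed is a timestamp in a narrow window, the keyspace collapses
    # to (window / timer resolution) candidates -- the kind of search a
    # bank of GPUs chews through quickly.
    for seed in range(t0_ns, t0_ns + window_ns):
        key = key_from_seed(seed)
        if xor_stream(ciphertext[:len(known_prefix)], key) == known_prefix:
            return seed, key
    return None
```

The known-plaintext check (a file-format magic header, say) is what lets each candidate seed be tested cheaply; the GPUs in the write-up are just doing this loop in parallel against real ciphers.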

Jacob March 22, 2025 2:45 PM

Chromatophores are pigmented organs that sit all over the squid’s skin. They have muscle fibers on the outside that are filled with neurons, allowing the animal to neuromuscularly open and control these pigment sacs based on what’s in their environment.

It’s cool how cephalopod neural processing is distributed all throughout the body, not just in a single “brain”.
Like a decentralised internet protocol.

not important March 22, 2025 3:43 PM

‘We use them every day’: In some parts of the US, the clack of typewriter keys can still be
heard
https://www.bbc.com/future/article/20250321-the-people-who-still-use-typewriters

=…his colleagues still use them to type up cheques and fill in legal forms to ensure the details on those documents are legible.

!!!Plus, there’s a security angle. It’s very hard to hack a typewriter since they are not connected to the internet. In 2013, jaw-dropping details emerged about the extent of US intelligence agency surveillance programmes. This prompted the Russian Federal Guard Service (FSO) to revert to typewriters in an attempt to evade eavesdropping. German officials were also reported to be considering a similar move in 2014. (During the Cold War, Soviet spies actually developed techniques for snooping on electric typewriter activity, a form of “keylogging” technology – where the keystrokes inputted on a keyboard are captured. US operatives also reconstructed text from typewriter ribbons – meaning that even typewriters aren’t completely safe.)

!!!He runs Typewriters.com and, despite a decline in sales in recent decades, he still shifts four or five electric IBM typewriters every week.
“I just sold 12 to a prison that’s
putting them in the library because they don’t let prisoners use computers,” he says.

Funeral homes, some of which use typewriters to compile death certificates, are also
regular clients.

It’s hard to feed these complex forms into a computer printer so that information gets
printed onto them in exactly the right places. So, says Lundy, the warehouse workers prefer to insert the form into a typewriter and type it up by sight instead.=

Clive Robinson March 22, 2025 6:04 PM

@ not important, ALL,

“This prompted the Russian Federal Guard Service (FSO) to revert to typewriters in an attempt to evade eavesdropping.”

Ahh the fun of it… Actually each key on an old mechanical typewriter makes a near unique noise due to the differing lengths of the pull bars[1].

Even I can hear it on my mechanical typewriters which are both over fifty years old and still work well enough for me to use three sheets of paper and a couple of carbon papers[2] (which are darn difficult to get hold of at a sensible price these days).

The advantage they have is “no data to disk” which I also have with the 80col dot matrix Epson printer I have.

As noted, the ribbons, especially on “golfball typewriters”, have a very accurate record of every character you type as well as word spacing. But the ribbons are easy to lift off the printer/typewriter and lock in a safe, or feed through a fine cut paper shredder, or just burn in a grate. But don’t try to shred carbon paper; the words “it’s real messy” do not do it justice…

Interesting fact: manual typewriters have a completely different “typing style” to computer keyboards, and if you are not used to it you get muscle cramps in the shoulders as well as the forearms.

Back in the 1970s, “Visual Display Units” (VDUs) had not yet started replacing mechanical motor-driven teletypes[3], which lasted in the military until the mid 1990s because the “TEMPEST” units, even the later ones made by Trend, were eye-wateringly expensive.

I started out on the KSR and ASR teletypes in the 1970s and they could really “shake a room”. You also developed shoulders like an American football or rugby player, or for the young ladies a golfer or swimmer.

The first VDUs I actually used were at college and they were a delight to use, but they actually made programming harder as you only got around twenty useful lines on a screen, whereas you could get a whole programme to “roll out” on a teletype.

Oh, and the joy of “punched paper tape”, not just for storage but actual editing as well… I still have my satellite tracking programme and its world map file in a tobacco tin, and several other programmes, including a “Mastermind” game and programs I developed to design basic electrical circuits and antennas that output the likes of Smith Charts. Back then developing such programmes got you “competition awards” as well as Diplomas, in part because CAD had not been invented 🙂

[1] The print mechanism on a “manual typewriter” is relatively simple…

The key is on an L shaped “rocker” that converts the downward press into a backwards pull. This moves the “pull bar” –often actually a wire– backwards. It is attached to the bottom of the print arm or strike/type bar, which has the letter hammer strike/type head brazed on the end of it at a fairly precise set of angles. The arrangement has “mechanical advantage” in that the key travel might be a third to half an inch, but the character hammer may well move up to four inches in an arc to gain the necessary velocity to “strike through” the ribbon and multiple layers of type paper and carbon paper.

I could describe how the rest of the near separate mechanisms work, but unless you have a need to repair or maintain a typewriter your eyes would probably “glaze over” 😉

[2] The term “carbon paper” gave rise to “carbon copy” which is where the “CC” you still see in Email clients comes from.

Whilst you can use each one several times, they are not cheap at ~$1/sheet these days. There are tricks you can do with some types and Isopropyl to get more life out of them. Similar tricks can be done with typewriter ribbons including “re-inking”.

However if you are careful and know what you are doing you can make a “carbon paper” with cooking “rice paper” or similar and old style wax “shoe polish” and paraffin wax or just “oil paint” and paraffin wax. It is generally easier and less expensive to get fan fold two or three ply “listing paper” for a “dot matrix line printer” and it’s a heck of a lot less messy.

[3] For those that wonder why Unix calls terminals tty it’s short for teletype (which is actually a “Trade Name” but like “Hoover” became generic).

Clive Robinson March 23, 2025 6:45 AM

@ Bruce, ALL,

Just how inaccurate is current AI?

What are oft called “Hallucinations” are an inbuilt feature of current AI LLM and ML systems, by the foundational way they work.

And the owners of such systems try to hide the Hallucinations with what are called “guard rails” or other human correction.

So answering the question is problematic, as the normal rules of scientific and engineering testing don’t apply due to the continuous owner intervention to “Polish the turd” or in other ways “Put a gloss on it”.

One way has been to,

“Always ask ‘New Questions’ to ‘stay ahead’ of the human intervention.”

But this has limitations: not least that you cannot show longer-term trends.

So another way to stay ahead of human intervention is to,

“Ask standard questions about constantly changing information input.”

One way to do this is with asking questions about “The News”.

The UK “British Broadcasting Corp”(BBC) has legal obligations about the factuality of their reporting. So they decided to test current AI LLM and ML systems a little while back.

To say the results were “not exactly encouraging” would, as they say, “be an understatement”…

That is, errors getting on for two thirds of the time in some cases, over half the time across all systems tested, and with some over nine tenths of the time… It is quite frankly not what you want.

So others have repeated the experiment/test and found equally dismal results,

https://www.techdirt.com/2025/03/20/tow-center-study-again-shows-ai-sucks-at-conveying-accurate-news/

I won’t “spoil the surprise” by giving the results of the Tow Center study; you can read those yourself.

But a question arises,

“Given just how bad all current AI LLM and ML systems tested were, repeatedly, on quite simple tasks, are they really ‘fit for purpose’?”

Or any purpose for that matter… Because even AI “Expert Systems” from the 1980s were significantly better than the current crop of publicly facing systems.

So why are the results so dire? Many will say different things, but consider the workflow of building Expert Systems back in the 1980s, which still continues to give good results today.

In effect the “input data” was “curated for accuracy” by a “human expert”, then other “human experts” in effect “distilled out” the pertinent facts and built decision trees to get from vague observations to specific facts and a diagnosis.

There are two essentials in that,

1, Trusted good input facts.
2, Trusted good decision logic.
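Those two essentials are visible in even the smallest 1980s-style expert system. A minimal forward-chaining sketch (rules and facts invented purely for illustration):

```python
# Minimal forward-chaining rule engine in the classic expert-system style:
# trusted facts in, trusted hand-written rules, deterministic conclusions out.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    # Repeatedly fire any rule whose premises are all satisfied,
    # until no rule adds a new fact (a fixed point is reached).
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Everything here is hand-curated: no conclusion can appear unless a human expert wrote both the fact and the rule chain that produces it, which is exactly the property the two essentials describe.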

These are both absent in current AI LLM and ML systems

“By design from the outset.”

Because the aim was to replace the expert humans in both steps and get an algorithm to do it by randomised probability. Or if you prefer,

1, Add noise to the information input –corpus– and average the result in a low pass filter.
2, Use the filtered output as a feedback signal to build a “probability graph”.

Then with user input,

3, Use the filtered output with further noise added to make a selection from the probability graph.

This sounds crazy, but it’s actually a variation of a standard technique in science and engineering, and is fundamental to thermodynamics, and for the same reason information theory, and so control theory as well.
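The three numbered steps above map loosely onto what temperature sampling does in an LLM decoder. A toy sketch (the logits are invented; real systems do this over vocabularies of tens of thousands of tokens):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Step 2 in miniature: turn accumulated scores into a "probability graph".
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    # Step 3: noise (a random draw) selects from the probability graph,
    # so the same prompt can yield different continuations each run.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1
```

Raising the temperature flattens the distribution, i.e. more “noise” in the selection, which is one reason the same prompt can produce different, and sometimes confidently wrong, outputs.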

There is however a “hidden gotcha” in this, and it’s to do with the probability distribution of the “noise”. For such a system to work, the noise has to have certain characteristics. If the noise is wrong, then since it’s fundamentally integral to the way the system works, it’s obviously going to affect the system and its output.

The noise in the input corpus is generally very much wrong as it’s in effect based on “Human cognitive bias”. Likewise the bias of user input.

In “Expert Systems” it is the “human experts” who debias and in many other ways clean up the noise, such that it’s either removed or of the right form.

Because of the way current AI LLM and ML systems were fundamentally designed there is no “Expert” in the system…

Which is why the owners are having to pay sweat-shop rates to non-expert humans to try and clean up the mess with “guard rails” and the like.

The problem is those humans are not “experts” either…

Eventually when the hype dies down about current AI LLM and ML systems, people will realise that they entirely lack intelligence, thus can not be “expert” in any way.

“Will this change?”

Is a difficult question to answer. But we do know that a fundamental part of being an “Expert” is finely tuned observation, which requires “context” and “agency”.

Current AI LLM and ML systems lack both and can not acquire them unless the context is highly restricted and rules based and purely informational with a large enough “clean / unbiased” input corpus.

So games like Chess and Go are within the capability. But Billiards, Snooker, or Pool, “NO”.

Unless we add agency via transducers added to “see object location” and “adjust cue position, angle, and force”. Then it can run “test after test”, as it now has “agency”, and by “observation” and averaging out noise pull out the signals that will allow “context” to be built.

But this raises a couple of other questions,

1, Is the entire world’s knowledge needed as an input corpus?
2, Do we need massive DNN LLMs with Transformer based ML?

The answer to both is already known and it’s “NO”.

Which leaves a third question,

3, Is intelligence required to be an expert?

Well… aside from the perennial,

“How do you define intelligence” issue,

Again it’s “context” based, and in most of the cases we know of the answer follows,

“Because those contexts are bounded and rules based, thus amenable to observation and test.”

Agency allows the context to be built by observation, and test by just statistics… So,

“NO”.

Clive Robinson March 23, 2025 8:01 AM

@ ALL

What are AI Hallucinations?

Well, I can give a long explanation, but it’s easier to say “Go Google”.

But that would be,

1, Lazy
2, Unwise

The second due to Google and other search engines now using / being hallucinating AI…

That said go look at,

https://www.ibm.com/think/topics/ai-hallucinations

Not the words but the picture at the top.

You see two women standing in what is probably a foyer, with glass behind them, in a publicly accessible building, probably on the ground floor, in what is possibly the UK. They are obviously posed for the photograph, as the “PR elements” are clearly present.

Much of this you get with just a glance from “context” you have built-in via your long term interaction in the urban/city environments you have day to day immersive experience of having “lived it” for quite some time.

Now a thought experiment to explain AI hallucination…

You in effect ask an AI,

“Where and who are these people?”

It comes back with,

“A mother and child at a birthday party”

You immediately go “What?” or similar in your mind, and decide it’s one of those mindless AI Hallucinations that get thrown up at random.

Actually it’s not random… It’s based on corpus data and found patterns.

Without going into too much detail you can see how it happens.

In the corpus data are photographs of girls’ birthday parties. At girls’ birthday parties the people are predominately mothers and their daughters. Recently “my little kitty” and similar have been themes to “dress up to”; this includes head bands with “kitty ears” and flowers in the hair.

Now look at the woman on the left: she has glasses on top of her head, and visible behind that, but out of focus as well as low resolution, is a green emergency escape sign.

The glasses have the same outline as a “kitten ears” headband, and are where you would expect such a headband to be. Likewise the green sign is where you might find a flower in the hair. Importantly, it’s on the woman’s left-hand side, which is where you would expect a right-handed mother to put it.

Due to the way tokenisation happens, we know this mistake has a very high probability of happening. And in fact similar has happened with pictures of whales with low clouds behind them.

We also know there are the darker skin tone issues, one of which is that AIs have trouble with it. They do things like assume the subject is in shade and thus misread visual cues that indicate things like genetics and age. It’s no secret that photographers tell older people to look down to look younger, and younger people to look up to look older. The woman on the left has darker skin, is looking down, and is taller; the woman on the right has lighter skin, is looking up, and is shorter.

To the AI the woman on the left is actually effectively much older.

Now… current AI LLM systems are in effect “difference engines” that treat oddities as signals to make choices by. An initial step is to find differences and use them to in effect set “a context”. Once in a context, common details are viewed through it.

So the big difference is what is on, or appears to be on, the head of the woman on the left…

This sets the “girls birthday party context”.

The difference in skin tone, body heights, and positions gives a perceived age difference that is too great.

Both of the mistakes,

1, Head adornment identification
2, Age difference identification

Arise due to “tokenisation” of the image…

The rest follows by probability based on what is in the input corpus and how it has been “tagged” or “vectorised”.

In short the AI context was wrong and at quite a variance to the context many humans would see.

Clive Robinson March 23, 2025 9:14 AM

@ Bruce, ALL,

Potential major genetics privacy failure

“23andMe” looks like it is about to go “belly up” and thus all the information and samples it currently holds will become the property of the “Receivers and Creditors”.

US law is a bit complex and varies from place to place, but as a norm it’s best to consider that all “contractual obligations” of the company with customers are dissolved.

That is, though the results and samples are “yours” whilst the company exists and the contract indicates that, come its demise they are no longer yours, but an asset to be sold at the highest price obtainable.

Which in all probability means you lose control for good, and it ends up with a data broker or similar, to be sold over and over and over.

So, as things are slightly different in California, their Attorney General Robert Bonta has issued an official notice with indications of what Californian consumers should do to try and prevent their data and samples being lost from their control,

https://oag.ca.gov/news/press-releases/attorney-general-bonta-urgently-issues-consumer-alert-23andme-customers

Some of this still applies even if you are not a Californian Resident.

Remember with “genetic data” it’s not just you as an individual it affects, but all your biological family: past, present, and future.

not important March 23, 2025 7:02 PM

@Clive’s post
https://www.schneier.com/blog/archives/2025/03/friday-squid-blogging-a-new-explanation-of-squid-camouflage.html/#comment-443875
privacy related

The man with a mind-reading chip in his brain – thanks to Elon Musk
https://www.bbc.com/news/articles/cewk49j7j1po

=The Neuralink chip looks to restore a fraction of his previous independence, by allowing him to control a computer with his mind.

It is what is known as a brain computer interface (BCI) – which works by detecting the tiny electrical impulses generated when humans think about moving, and translating these into digital commands, such as moving a cursor on a screen.

!!!”One of the main problems is privacy,” said Anil Seth, Professor of Neuroscience, University of Sussex.

“So if we are exporting our brain activity […] then we are kind of allowing access to not just what we do but potentially what we think, what we believe and what we feel,” he told the BBC.

“Once you’ve got access to stuff inside your head, there really is no other barrier to personal privacy left.”=

Do You see Thought Police on steroids as the possible next step in a future?

Thank you for Your input on typewriters.

Clive Robinson March 24, 2025 3:14 AM

@ not important, ALL

Re : Thought Privacy under technology

You ask,

“Do You see Thought Police on steroids as the possible next step in a future?”

Some have already looked at trying to use “Fast MRI” as a “lie detector”, and I’ve commented in the past about the “Southpaw problem” of those who are left handed. That is, “lefties brains are not wired up right”, so they generally do not get included in the related research. Which is going to cause all sorts of issues down the line.

But more relatedly,

“Yes I did back last century and talked about it quite a bit at the time.”

But people were more interested in Y2K back then, and research opportunities were not in Human Brain Interfaces. The only research was like that carried out by Microsoft, called HCI back then, but these days UI Design is where it’s gone…

As I’ve mentioned before, the most people I shocked with it at one time was in the summer of 2000, at an EU funded program on Crypto and Security held at Uppsala University in Sweden (same time as when the “Tall Ships” sailed in, but that’s another story).

I gave a presentation/talk in which I mentioned “insider threat actors faking it as outsider attacks to hide their tracks”, and the “any questions” went right off topic, with a question from the lady[1] who was starting the PETS Conferences about “interfaces and biometrics” with respect to security/privacy. As I knew a fair bit about the related fields of robotics, medical electronics, AI, and communications, having at that time spent about half a career length working in them (1980s onwards), I gave an in-depth answer, which I ended with the comment,

“The day Bill Gates requires a 9pin DIN[2] connector in the back of your head, is definitely the day I retire!”

So yes I’d long seen the danger at that point, and finally now as it’s becoming a reality other people are as well.

So a quarter century of “lost time”, where we could have been planning and legislating thoughtfully, rather than by the necessity of “boots hitting the ground”.

I can see the same “loss of time” happening with AI research and products, and the way it’s going it will not end well…

[1] Simone Fischer-Hübner now at Karlstad University,

https://www.kau.se/forskare/simone-fischer-hubner

With 20-20 hindsight I wish I’d taken up her invite to do a presentation at the first PETS. However at that time I saw my academic and career progression being in a somewhat different direction, as I was having trouble trying to find a PhD supervisor who would let me research what I wanted, rather than be their underpaid lab rat for two to four years.

[2] Back last century, the standard connector for “user interface devices” was the inexpensive and easily available, but large, 9-pin DIN connector. It transitioned through the “miniature DIN” to USB-A, and now, due to EU legislation about phone chargers and “Waste Electrical and Electronic Equipment” (WEEE), to USB-C.

Clive Robinson March 24, 2025 4:46 AM

@ Bruce, ALL,

Update on,

Potential major genetics privacy failure

https://www.reuters.com/business/healthcare-pharmaceuticals/dna-testing-firm-23andme-files-chapter-11-bankruptcy-sell-itself-2025-03-24/

I’ve no idea what the board are up to or thinking, but with Chairman Mark Jensen saying,

“After a thorough evaluation of strategic alternatives, we have determined that a court-supervised sale process is the best path forward to maximize the value of the business”

I question either their sanity, their integrity, or both.

fib March 24, 2025 3:38 PM

Salutations all. Have you heard the news?

‘https://www.theatlantic.com/politics/archive/2025/03/trump-administration-accidentally-texted-me-its-war-plans/682151/

Grima Squeakersen March 24, 2025 6:10 PM

@fib I don’t use Signal, but have a little familiarity with its CLI interface. It seems one of the intended participants got careless and included Goldberg in the chat group, either by name/pseudonym or phone number. I suspect that the “friendlier” (relaxed and quasi-informal) secure communications are made to appear, the higher the probability of this kind of user error. A higher bar of difficulty seems to make one more focused.

Clive Robinson March 24, 2025 6:40 PM

@ fib, Grima Squeakersen, ALL,

Re : Silent participant in Secure Messaging Apps.

The author of the Atlantic article may or may not have been aware of a known and very serious flaw in both WhatsApp and Signal and actually most other “group” communications with alleged E2EE.

From memory it’s what the failed vote in France was actually all about.

To be “efficient”, most if not all secure communications systems that support groups have “oblivious key sharing” (or, in Telegram’s case, encryption is simply turned off). So the potential security advantage you get with E2EE in 2-party communications is completely lost in more-than-2-party or “group” conversations.

I’ve made this known in the past, in particular about “Video Conferencing” during lockdown where I went into the details as to,

“Why keys had to be shared in a group or conference.”

We know this is known to the UK SigInt agency GCHQ, so we can also assume it’s known to all Western government SigInt agencies that are part of one of the “Eyes” groups. And likewise in all first and second world countries, not just to government agencies but to law enforcement and a number of corporate entities supplying surveillance services to anyone who can meet their price…

All the SigInt organisation needs to do is “add a silent member to the group” then they get to see everything…
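The “silent member” fan-out can be modelled in a few lines. This is a toy (XOR stands in for real encryption, and real sender-key group protocols are far more sophisticated), but the shape of the problem is accurate: the message key is wrapped for whoever the membership list says is in the group, approved or not.

```python
import hashlib
import os

class GroupChat:
    # Toy model of group-key fan-out: each message is encrypted once under a
    # fresh message key, and that key is wrapped for every current member.
    def __init__(self):
        self.members = {}  # name -> member's long-term key (toy)

    def add_member(self, name):
        # Nothing here forces the other members to notice or approve the join.
        self.members[name] = os.urandom(16)

    def send(self, plaintext: bytes):
        msg_key = os.urandom(16)
        stream = hashlib.sha256(msg_key).digest() * 8  # toy keystream
        body = bytes(b ^ k for b, k in zip(plaintext, stream))
        # Wrap msg_key for EVERY current member -- including a silent one.
        wrapped = {name: bytes(a ^ b for a, b in zip(msg_key, mk))
                   for name, mk in self.members.items()}
        return body, wrapped

    def read(self, name, body, wrapped):
        msg_key = bytes(a ^ b for a, b in zip(wrapped[name], self.members[name]))
        stream = hashlib.sha256(msg_key).digest() * 8
        return bytes(b ^ k for b, k in zip(body, stream))
```

If whoever controls the membership list (the server, or an agency that can lean on it) adds “eve”, she decrypts everything from that point on; the end-to-end encryption of each message is intact, and irrelevant.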

It’s another form of “See What You See”(SWYS) attack. This time at “The KeyMat” level as opposed to the “message level”.

Oh and there is another SWYS level that happens at the “KeyMan” and “KeyGen” levels. To see what it is have a think about how the 2party KeyGen in these secure messaging apps actually works…

A clue to think of one of the easier methods… “A ratchet might only turn one way, but as long as I can start my ratchet before yours…”

And people wonder why I don’t use “Secure Messaging Apps”…

Clive Robinson March 24, 2025 10:04 PM

@ Bruce, ALL,

CO2 laser detects radiation ten times further than particles travel.

This has some interesting implications for those that want a new aspect to Active EmSec and EW ECM techniques,

“Conventional radiation detectors, such as Geiger counters, detect particles that are emitted by the radioactive material, typically limiting their operational range to the material’s direct vicinity. The new method, developed by a research team headed up at the University of Maryland, instead leverages the ionization in the surrounding air, enabling detection from much greater distances.”

https://physicsworld.com/a/co2-laser-enables-long-range-detection-of-radioactive-material/

But think a little further: many other things cause ionisation or other changes to the air around them. All of which create an effect such as reflection, refraction, or attenuation.

In the past I’ve mentioned “red eye” or 180 degree internal reflection detection techniques, which is the principle that “cats eyes” in the road work on.

In particular, a trick that can tell the difference between the eyeball of a human or other living creature and the silicon sensor in a video or SLR camera. Both have 180 degree reflection, but living eyes reflect a different frequency spectrum to that of silicon, so a security system can tell if a camera, rather than a casual eyeball, is pointing its way.

Likewise I’ve mentioned that any length of wire that conducts acts like an antenna, which likewise reflects or absorbs and has a frequency response. Thus you can tell if someone has a radio transmitter or receiver in your vicinity, even one that is not turned on, and even its equipment type.

I’ve also mentioned that gas discharge lights act like conductors because the gas inside is ionised and the older style act as a consequence like antennas.

But it does not have to be ionisation; it just requires a difference in density. We’ve all seen “heat shimmer” off a flame or hot “black-top” road, and some of us have even seen mirages. All local differences in temperature will convect up and create differences in density that can be detected.

In the past I’ve mentioned “air lenses”, where you take a metal pipe and heat it up. The air in the pipe is at a different density, and if you spin the pipe you shape the change in density into the equivalent of a “rod lens”. Which is something useful to know if you are developing a high energy laser to shoot down drones.

Put simply, any process that puts energy into the environment, or absorbs it from it, will affect the environment around it.

In the past I’ve mentioned how to use a thermal imaging camera to detect electronics like bugs buried in walls etc., because of the difference created. Further, turning the room temperature up and down will, due to the different thermal capacities, show objects up even if they are not powered.

Using a laser to “extend the range” of such detection is not just a fun idea but eminently practical for detecting what others don’t want you knowing about.

Spencer March 25, 2025 3:33 AM

I predict that Signal gets a nice user bump from the Atlantic article. It’s not a coincidence that the top brass use Signal and not WhatsApp for their personal chats, particularly with all the legislation about backdoors under discussion.

BTW, am I correct in assuming that discussing classified operations on signal is a major security violation, even without adding reporters to the chat? I guess the DOD must have some official chat program, but that it’s unbearable to use.

lurker March 25, 2025 4:57 AM

Gentle readers may wish to compare the cavalier attitude of people in high places to national security with that of the manager of a tiny Massachusetts utility co. who told the FBI to eff off when they offered to email him a link to click on… Oh, and as well he sacked their managed services provider.

‘https://www.theregister.com/2025/03/12/volt_tyhoon_experience_interview_with_gm

Eriadilos March 25, 2025 6:45 AM

@not important @Clive

Re : Neuralink and its potential problems

I think that with such invasive systems, obsolescence is also a very important thing to take into account:
– what if the company goes bankrupt (cf. https://spectrum.ieee.org/bionic-eye-obsolete)?
– what if a software update breaks your device? “Go fast and break things” has serious consequences in the medical field, but that could be hard to understand for some decision-making folks
– how are end of life and end of support handled for these? If the external hardware module needs to be replaced, is there a guarantee that spare parts are available?

Clive Robinson March 25, 2025 6:59 AM

@ Spencer,

Re : Questions about Signal

“BTW, am I correct in assuming that discussing classified operations on signal is a major security violation, even without adding reporters to the chat? I guess the DOD must have some official chat program, but that it’s unbearable to use.”

The use depends…

1, On what the Commander in Chief or his chosen appointees decide.
2, If it can be shown to be or made secure.

On the first, have a look back to Obama and his “Blackberry” and “Trump the first time around with his mobile phone”.

They commanded, others scurried.

On the second, go back in time to pre Cold War, and a couple of millennia before that.

People had two basic choices in war for communications: “covert and slow” or “fast but overt”. The first “by hand”, using a courier or soldier on foot or horseback. The second by smoke, lights, flags, and later telegraph then radio.

Obviously couriers could be intercepted or waylaid, and smoke and flags etc could be seen by anyone within line of sight, and with telegraph and radio even further.

Thus cryptography was necessarily used for both.

So if you treat Signal and all the other supposedly Secure Messaging Apps as “a public broadcast medium” when doing “group chats”, then you would need to encrypt information securely outside of the app for it to be secure… Which kind of makes the app superfluous (and is just one reason I don’t use them; another being the “silent participant” issue, and several others including “putting a target on your back”).
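Treating the messenger as a public broadcast channel means doing the cryptography before the text ever reaches the app. A standard-library-only sketch (HMAC-SHA256 in counter mode as a stream cipher, encrypt-then-MAC with a single pre-shared key; function names are my own, and a real deployment would use a vetted AEAD with separate encryption and MAC keys — this is illustrative only):

```python
import base64
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # HMAC-SHA256 used as a PRF in counter mode -- toy stream cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hmac.new(key, nonce + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> str:
    nonce = os.urandom(12)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:16]
    # base64 so the result can be pasted into any chat window as text.
    return base64.b64encode(nonce + ct + tag).decode()

def open_(key: bytes, blob: str) -> bytes:
    raw = base64.b64decode(blob)
    nonce, ct, tag = raw[:12], raw[12:-16], raw[-16:]
    expect = hmac.new(key, nonce + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("bad tag")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))
```

The base64 blob is what you would paste into the chat; the app then carries only ciphertext it cannot read, at the cost of key distribution becoming your problem out-of-band.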

Fine if you are the Commander in chief “pissing out of the boat” but in a “Might is right” world you as an individual etc are never going to be on a boat that can not be quickly sunk…

As for a DOD app, who knows or cares; we know it’s been considered several times in the past.

But whilst one may exist, and may be unbearable to use (especially if actually secure), that would not be a reason not to use it.

Most of these people are little more than “children” when it comes to practical security and thus they do stupid things for the thrill of it or think they are being clever.

They behaved like Russian nihilist plotters from the turn of the 19th century, without having learnt from nearly a century and a half of subsequent history that you can find out about online and in college-level history books.

Because they could not perceive the potential threats, they had no “field craft” and their “OpSec” was lousy. I suspect their use of Signal was actually well known to several SigInt agencies, because it would be almost trivial to find out.

Clive Robinson March 25, 2025 7:24 AM

@ lurker,

Re The Register Article

I very nearly fell off my chair laughing so much.

The guy actually did almost everything right, but in a way that says oh so much about his character.

But more seriously it also shows up an aspect that nearly everyone suffers from:

“He was not paranoid enough”

His “Why Us?” question and reasoning shows a fundamental misunderstanding of the world “on the other side of the looking glass”.

Of the alleged attackers it’s been said they are not employed by the state, and they only get paid minimally on results.

If true this engenders a certain mentality of

“Any rice in your bowl is better than none.”

So even “low hanging fruit” are “fair game” in a “numbers game”, which is how cyber-crime (which cyber-espionage effectively is) functions.

Especially in a “Target Rich Environment”, where no matter how bad your security is you will not get attacked unless your number comes up, or you somehow become a target by your own actions or others’ requirements.

The hand holding the purse strings would have given a loose set of requirements, and those actually attacking probably had no knowledge and like as not did not care. The utility company matched the set of requirements, so no matter what its actual strategic or pecuniary value, it got hit and owned.

In part this was because their “managed service provider” made them an easy target, thus easy money in the bank: very low hanging fruit that got bitten.

Clive Robinson March 25, 2025 7:56 AM

@ Eriadilos, ALL,

Re : Implanted Medical Devices.

I have some experience with designing medical electronics, going back to the 1970s and at various times since.

Your points are all valid and you can add a couple more,

1, The security of such devices is effectively non-existent, or at best security by obscurity.
2, Regulatory requirements are usually too onerous to even consider patching, upgrading, or even defect fixing.

Even when it’s external electronics, whether battery or mains powered, these issues still apply.

At one point the use of various Microsoft OSs was popular, as it let fancy displays be built by general consumer/commercial application programmers rather than experienced specialists, thus at much lower cost…

Remember the “worm that nearly took out the world” and all the harrumphing about the UK NHS having so many out-of-date OS computers?

Well a big chunk of them were medical electronics that were effectively abandoned by their manufacturers because of regulatory costs of patching and upgrading.

Hospitals are vulnerable to attacks because many of these medical devices are “networked”. And, for reasons that still get under my collar and chafe, the UK NHS authority wants all those networks connected up to the “NHS Backbone”, which I knew in the early 2000s was easily accessible from the Internet (it goes back to the idiocy of then UK Prime Minister Tony Blair and his big project ideas, which wasted many billions and sadly still do).

Oh, and many of those external medical devices use(d) “JSON” and the like, which those who constructed them really had no idea how to use safely or securely.

Clive Robinson March 25, 2025 5:07 PM

@ Bruce, ALL

“AI Bedazzle, Beguile, Bewitch, Befriend, and BETRAY plan progress”

Now at “BETRAY?”

I pointed out early on that the current AI LLM and ML system hype was in effect hiding a surveillance nightmare for everyone. Because the likes of Alphabet/Google and Microsoft had a “Privacy Stealing Business Plan” for these AI systems use. I noted it had the basic stages of,

“Bedazzle, Beguile, Bewitch, Befriend, and BETRAY”

Well it’s been turbulent and lots of people are at different points along that curve right now. But that is about to change because of “AI based Assistants”.

In part because Google and Microsoft are unhappy that we are not all at the BETRAY stage, which for them starts the “payback and profit” on the billions that have been pushed into the hype of current AI LLM and ML systems. The fact that most of that “investment” money has come from unrelated investors, who will soon be “shirtless”, makes it so much the richer for Google and Microsoft; they don’t want competition on what they see as their rightful return. Microsoft in particular is forcing “AI with everything” into its new and upcoming products. But others are also getting into the AI game to try and “get a slice of the action”.

Now some will think I’ve kind of fallen off my perch with excess paranoia, but history suggests otherwise. And if you doubt it, then can I suggest you have a read of someone else’s thoughts on the matter,

https://www.theregister.com/2025/03/25/generative_ai_browser_extensions_privacy/

“A group of computer scientists from University of California, Davis in the USA, Mediterranea University of Reggio Calabria in Italy, University College London in the UK, and Universidad Carlos III de Madrid in Spain set out to analyze the privacy practices of browser extensions that use AI to do things like produce summaries of webpages or answer questions about content.”

These “AI extensions” or “Agents” are what both Google and Microsoft are now forcing on people to “search the Web”, because both have majorly contributed to what Cory Doctorow calls “enshittification”, a process that has rendered the search engines of both near useless.

The aim of Google, Microsoft, and others is to own your most closely held private data, for them to make a profit by.

But others have the same intent with their “Agents”, as the researchers have indicated and the article notes,

“Despite the use of familiar terms like ChatGPT, Google, and Copilot in the titles of these extensions, the makers of these extensions are unaffiliated with Google, Microsoft, or OpenAI.”

So a fair degree of “Cheese Stealing” going on in the AI Agent Game.

But importantly,

“Generative AI assistants packaged up as browser extensions harvest personal data with minimal safeguards, [the] researchers warn.

Some of these extensions may violate their own [published] privacy commitments and potentially run afoul of US [and EU] regulations”

Anyway, I urge you to go away and read it with an open mind.

P.S. Mandatory disclosure : As I have mentioned in the past I have associations with two of the included Academic Institutions, as well as others working in similar research.

This however does not mean I currently have “skin in the AI game” for the commonsense reason I like my shirts and want to keep them as I assume most others do.

But more importantly, in the late 1990s and early 2000s I was working for an academic research database search organisation, and did research work on how to make “information on the Internet” make sensible money. Even back then, a quarter of a century ago, it was clear that in general people were not going to pay, because there were far too many information resources available, and so it’s turned out. I’m continuously amused by Rupert “the bear-faced liar” Murdoch desperately trying to make his failing newspaper empire pay directly from readers, and, well, failing. He and his children did not get it then and I don’t see real evidence they’ve changed since (so the “Equus Necrium Flagitious” continues).

lurker March 25, 2025 5:42 PM

@Clive Robinson

re: techdirt-tow-center-study and
re: ai-browser extensions

Just looking at the quantitative results of the Tow research, there appears to be a special feature in Copilot where it doesn’t (is incapable of?) make stuff up, but simply refuses to answer. OTOH Grok shows its parentage in the amount of stuff it makes up, and outright steals from elsewhere. Could this have anything to do with its ability to speak Hindi street slang? [1]

Anyway. I’m not the target market for those browser extensions. I don’t want some one else’s summary. I want to see the wrinkles the summarisers smooth over, before they start doing other things.

[1] ‘https://www.bbc.com/news/articles/cd65p1pv8pdo

Clive Robinson March 25, 2025 10:54 PM

@ lurker,

With regards,

“Anyway. I’m not the target market for those browser extensions.”

Ahh… You’ve just revealed something 😉

There is a saying,

“By the time you’re forty you’ve learnt to use the tools you use.”[1]

The implication is not that you are necessarily skilled with them, but that, bad as the tool and your usage might be, it’s going to be faster to the end result than learning a new tool, no matter what you might get from it.

That said in the ICT Industry a lot of things move rapidly then die. So learning new tools is something we tend to have to do…

My advice when ever I’ve been asked by those who are “still bright eyed and bushy tailed” is still the same over forty years after I first gave it,

“Learn two things well. Firstly learn the foundations well. Secondly learn how to build on them efficiently and quickly.”

Usually the smart ones won’t just “nod along” they will ask questions that kind of fall into two categories and the answers are effectively,

“If you understand the first principles you can always build up to something new. But if you don’t then if something changes you have nothing to draw on to gain understanding.”

This is something that is really bad in industry: “The damagers” want people who use their tools and only their tools, so that they can do one job only. In the past such people were taught by “Sitting next to Nellie” and paid at “piece rates”.

The second type of question is in effect “Why efficiently and quickly, not quickly and efficiently?”

To which the short answer is,

“You have to be lazy to get things done faster and more reliably over all.”

If you take the time to make a process efficient by, say, ‘building a script’ the ‘right way’, the result is efficient even though you might have done the job five times over in the same time. However, the subsequent time savings quickly build up and repay you many times over, and with a lot fewer errors.

Also if you write a script the “right way” it should be easy to change, and act as a skeleton for the next script you write, thus saving you time[2].

The secret is learning to view the world and the way things work the right way as quickly as possible. Because that can buy you the one thing there is never enough of to waste, and that’s time.

A good piece of advice I was once given was,

“Never write code for it to be reusable, instead write code that is easy to change for reuse.”

The difference is lost on a lot of Open Source developers who think that to be useful they have to put in “the kitchen sink”… They don’t; they should only put in the hook points, in a way that lets someone else add a sink that meets their needs if they need to.

One of the failings of C++ is the STL: it is usually written the wrong way, so the equivalent of thousands or tens of thousands of lines of code gets dragged in, mostly needlessly, “just in case”.

[1] It’s kind of a variation on the older,

“When you get to forty you have the face you deserve.”

And has similar implications of lack of change to,

“You can’t teach an old dog new tricks.”

[2] A classic example of this is “duplicating media”. Back in the days of tapes and floppy drives, they were frequently slow to copy from a read drive to a write drive. The reason was the issue of “block size”: if you read in by the data block size, you then had to wait for the disk to go around to get the next data block. However, make your program so it has a buffer that can hold an entire track/cylinder and you save a lot of time and wear on the drives: you tell the read drive to change cylinder whilst you push the buffer out to the write drive. Carry on in this “salt and pepper” way and you can save over 80% of the time to duplicate a disk. And if you bear in mind, as you build your script/program, that different disks have different block sizes, sectors per track, and numbers of tracks, making changes to accommodate new media and drives is almost trivial and takes just moments.
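
The track-buffered copy described in [2] can be sketched as follows. This is a minimal illustration only, not real drive code: the geometry constants are an assumed example, and read_track/write_track are hypothetical stand-ins for driver calls, with bytearrays standing in for the drives.

```python
# Track-buffered duplication sketch: read a whole track from the
# source, then write it to the destination, instead of going
# sector-by-sector and waiting a revolution between data blocks.

TRACKS = 80             # assumed geometry, e.g. a 3.5" HD floppy
SECTORS_PER_TRACK = 18
SECTOR_SIZE = 512

def read_track(drive, track):
    # stand-in for a driver call that returns one entire track
    start = track * SECTORS_PER_TRACK * SECTOR_SIZE
    return drive[start:start + SECTORS_PER_TRACK * SECTOR_SIZE]

def write_track(drive, track, data):
    # stand-in for a driver call that writes one entire track
    start = track * SECTORS_PER_TRACK * SECTOR_SIZE
    drive[start:start + len(data)] = data

def duplicate(src, dst):
    # one track-sized buffer: the source drive can seek to the next
    # cylinder while the buffer is being pushed to the write drive
    for track in range(TRACKS):
        buf = read_track(src, track)
        write_track(dst, track, buf)

# usage, with byte arrays standing in for physical drives
src = bytearray(b"\xAA" * (TRACKS * SECTORS_PER_TRACK * SECTOR_SIZE))
dst = bytearray(len(src))
duplicate(src, dst)
```

Changing the three geometry constants is all it takes to accommodate a different medium, which is the point about writing the script the “right way”.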

Who? March 26, 2025 5:46 AM

@ ALL

I need help hardening the UEFI BIOS on a few computers that will no longer receive updates from the manufacturer. I know the NSA CSI and CTR documents related to hardening the BIOS and follow them as carefully as possible. My question is related to the concepts of “adjacent network (A)” and “network (N)” metrics in the CVSS scoring system, as outlined here:

hxxps://www.first.org/cvss/v2/guide

As I understand it, adjacent network access requires some access to the device at OSI layer 2, while network access requires access at OSI layer 3. Can we protect against these attack vectors by disabling the TCP/IP stack in the BIOS, PXE, and WoL?

Yes, I know features like ME/AMT have a bad reputation, but I hope that buying a system labeled as “ME disabled”, or disabling the engine in the BIOS settings, is enough to get it disabled (even if not in a permanent way for the latter), as no one in the security community has, to my knowledge, discovered anything that denies it yet.

So, returning to the main question… how can we protect against the (A) and (N) attack vectors? Can disabling anything that allows the firmware to talk to the network help?

Thank you!

Who? March 26, 2025 6:07 AM

@ Clive Robinson

About AI hallucinations

Hi Clive.

In my humble opinion the key to understanding AI hallucinations is given, indirectly, in the IBM URL you noted before. We read about the use of those hallucinations as a creativity source for artists and designers. I think this is the key.

Current AI systems (LLMs) lack “common sense.” They are unable to differentiate what is a key element in an image or text from what is accessory. In other words, there is no way to give a different weight to a central element in a picture than to an element shown in, say, a TV display in the background of the same image.

Once they are able to put all those elements in context, hallucinations will be solved.

It is the same problem we have with information retrieved from us by government agencies or corporations like Alphabet and Meta. This information is dangerous because it lacks context, so it is very easy to misunderstand. Same happens to current AI technologies.

Who? March 26, 2025 6:12 AM

To be more precise about the relation between AI hallucinations and dream-like sources of information… when we dream, we lack those “protection mechanisms” that put things in context. This is the reason dreams are so odd sometimes. Our brain lacks the “locks” provided by reasoning, so it runs without the restrictions imposed by our reality.

Same happens with hallucinations. Without the required locks provided by what we call “common sense,” AI systems can do anything with the data provided when training them.

LLMs should run more on “BSD” and less on “LSD”; but common sense is something that has yet to be solved in our current AI developments.

Clive Robinson March 26, 2025 7:26 AM

@ Who?, ALL,

With regards,

“I need help hardening the UEFI BIOS on a few computers that will not receive more updates from the manufacturer.”

The simple answer is,

“replace old with new”

Or more specifically “unsupported with supported”. As it’s something I have had to live with going back 40 years, I assume that you have,

“An impediment to upgrade other than cost.”

In my case it’s manufacturing equipment that cannot be replaced… that has interfaces that are no longer supported (a silly example, so people get the idea, would be a long-bed cutting plotter that only has a Centronics interface driven by very custom software).

Usually the first step in any “upgrade” is to,

“audit functional usage and analyse it”

This can help you find “piece-wise solutions” to issues. In your case, by eliminating the need for these computers to have “direct access” to networks that are not segregated. This might be as simple as building an old-fashioned “application firewall”.

Because the basic security steps are,

1, Upgrade
2, Patch
3, Mitigate
4, Eliminate

You indicate that (2) is no longer possible, and I’m assuming that likewise (1) is not possible because you cannot do (4).

This leaves only (3), the “mitigation route”, until you eventually have to do (1) due to the effects of entropy on hardware.

So in effect you are only buying an uncertain degree of time.

If I’m reading your post correctly, your primary desire is “security” from attackers that are “external” rather than “internal”.

The hard mitigation for this is “energy gapping”, or, less difficult but also less secure, “air gapping”. To in effect put these systems in an “impenetrable bubble” so attackers cannot reach them.

But information systems at the lowest level do three things,

1, Process information.
2, Store information.
3, Communicate information.

Fundamentally, to (1) process information it needs a “CPU” or equivalent, which in turn requires the other two, albeit as “memory on the bus”. Likewise, to (2) store information, even within the CPU the information store needs to (3) communicate the information. And if you want to get data in or results out, you need to (3) communicate information.

Which means that unless you can put the entire system into the “impenetrable bubble” you are going to need “gap crossing” that is limited to only “allowed by policy” communications.

Further where the gap is crossed the information needs to be in the simplest possible format (but no simpler than absolutely needed).

This is because you need to “verify, check and audit” what crosses the gap. That is you also need to instrument the crossing in a way that can not be seen or changed by an attacker.

I’ve mentioned in the past how I go about doing such things and it’s lengthy so I won’t detail it here just give a “Cliff’s Notes” 20,000ft overview.

In effect all information is expanded and converted to a very simple format (like ASCII). If “control is included” all commands are “hard coded and minimal” such that there is the minimum of choice or flexibility.

That is you reduce “the redundancy” to the minimum because this denies attackers the ability to establish their own “covert channel” or to vary from what is “allowed by policy”.

Yes it’s a pain to work with because it lacks “flexibility” as it has “minimal complexity” and in many ways it will be seen as “inefficient” but as it removes “corner and edge cases” it’s usually a price worth paying to “effectively meet security requirements”.
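
A minimal sketch of such a gap-crossing guard, assuming a hypothetical hard-coded command set (the names STATUS/READ/STOP are invented for illustration, not from any real system): everything is forced into printable ASCII, anything outside the allowed commands is rejected, and everything that passes is logged at the crossing.

```python
# "Allowed by policy" gap-crossing guard sketch: minimal format,
# minimal command set, instrumented crossing.

ALLOWED_COMMANDS = {"STATUS", "READ", "STOP"}   # hypothetical policy
audit_log = []   # instrumentation the sender cannot see or alter

def guard(message: str) -> str:
    """Pass the message through only if it meets policy."""
    # Simplest possible format: printable ASCII, no binary, no markup.
    if not message.isascii() or not message.isprintable():
        raise ValueError("non-ASCII or non-printable content rejected")
    parts = message.strip().split()
    if not parts:
        raise ValueError("empty message rejected")
    # Hard-coded, minimal command set: strips out the redundancy an
    # attacker would need to build a covert channel.
    if parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"command {parts[0]!r} not allowed by policy")
    audit_log.append(message)   # audit everything that crosses
    return message

print(guard("STATUS pump1"))        # allowed by policy
try:
    guard("EXEC rm -rf /")          # rejected: not in the command set
except ValueError as err:
    print("rejected:", err)
```

Inflexible and “inefficient”, as noted above, but the lack of choice is exactly what makes it checkable.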

Who? March 26, 2025 11:47 AM

@ Clive Robinson, ALL

Indeed, I want to work on the third step: mitigation. I just want to know if a BIOS can be hardened enough to be considered reasonably secure against external threats. I do not care about insiders (in most cases, only two or three trusted persons), only about outsiders.

I am most worried about the perimeter network; our networks are firewalled such that only one external port is listening, for a single UDP service (wg(4)), and it does not respond to network traffic except when it has been processed with the right cryptographic secret.

My worry is not only outdated UEFI firmware on the perimeter network, but also firmware that has unknown (or known, but unpublished) vulnerabilities, so I want to minimize the attack surface even for systems whose UEFI BIOS firmware is up to date.

All these systems run OpenBSD, in most cases -current but sometimes a stable branch too.

Indeed, replacing them would be the right fix; in the real world, however, it is not a choice we have in all cases. Most of my systems are not, and never were, connected to the Internet. They are not energy gapped, it is too expensive, but at least they are air gapped. Some systems, however, need to have some sort of Internet access, even if it is a limited one.

Anonymous March 26, 2025 3:34 PM

I thought it fitting to post about these two marvels, one for RF geeks/hackers, one for security.

An interesting Linux distribution for SDR users!

=== DragonOS

DragonOS is a Lubuntu-based desktop distribution which is focused on software defined radio (SDR). The distribution provides a pre-installed suite of the most powerful and accessible open source SDR software. DragonOS has verified support for a range of inexpensive and powerful SDR hardware, including RTL-SDR, HackRF One, LimeSDR, BladeRF, and others. – quote source

Some reviews, features, and tutorials are located here!

OpenBSD LiveCD? OH, MY!

=== Fuguita

You may have never heard of FuguIta, but it’s time you have!

OpenBSD LiveCDs have come and gone, but FuguIta has been going strong for 20 years!

FuguIta is an OpenBSD live CD featuring portable workplace, low hardware requirements, additional software, and partial support for Japanese. This live CD is intended to be as close as possible to the default OpenBSD when installed on a hard disk. – quote source

On February 20, 2025, they celebrated their 20th Anniversary of the public release:

"To be precise, it dates back to the release of its predecessor, CD-OpenBSD.

Initially, it was just an experimental project to create an OpenBSD system that could boot from a CD. I never imagined it would last this long.

Now, FuguIta supports three CPU architectures: i386, amd64, and arm64.
It can also be installed and used on a variety of media, including DVDs, USB memory sticks, SD cards, hard disks, and SSDs.

Its use cases have also expanded. While it was originally intended as a way to "try OpenBSD," it is now used not only as a daily PC environment but also as a dedicated machine for servers, routers, and IoT devices.

As a result, FuguIta is now used for various purposes in different countries around the world." [...]

"When I first released CD-OpenBSD 20 years ago, there were many similar OpenBSD-based live systems. However, most of them have ceased development over time, and now FuguIta is likely the only one remaining." [...]

Clive Robinson March 26, 2025 4:52 PM

@ Who?, ALL,

You raise a couple of points,

“We read about the use of those hallucinations as a creativity source for artists and designers.”

“Current AI systems (LLMs) lack “common sense.” They are unable to differentiate what is a key element on an image or text from what is accessory.”

The first is part of the “G for generative” in generative AI and ChatGPT.

I contend that it’s a modified “signal-to-noise and distortion ratio”(SINAD) function (well known to analogue electronics / radio engineers)

https://en.m.wikipedia.org/wiki/SINAD

Where what you see as an output signal is a translation of the input “signal + noise + distortion” into another signal, via a nonlinear (sometimes non-reversible) function such as rectification.

It’s simple to do with a DSP multiply-add (MAD) instruction followed by either a rectification or a sigmoid function, all of which are standard in a DNN “neuron”.
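
As a rough sketch of that “neuron” (a multiply-accumulate followed by a nonlinear squash), with arbitrary example weights; this is illustrative only, not any particular DSP or framework code:

```python
import math

def neuron(inputs, weights, bias, activation="relu"):
    # multiply-accumulate: the DSP "MAD" step
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w
    # nonlinear squash: rectification or sigmoid
    if activation == "relu":
        return max(0.0, acc)             # rectification (non-reversible)
    return 1.0 / (1.0 + math.exp(-acc))  # sigmoid

# A "clean" input and the same input with a little noise: because the
# squash is nonlinear, the noise is folded into the output signal
# rather than simply averaging away.
w = [0.4, -0.3, 0.9]
print(neuron([1.0, 0.5, -0.25], w, 0.1))    # clean signal
print(neuron([1.02, 0.48, -0.27], w, 0.1))  # signal + noise
```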

In part it’s why GPT output sounds and looks stylistically the same, and often reads like “Marketing Dept material”.

Your second point is something I’ve talked about before and why I think the lack of “agency” will halt current AI LLM and ML systems developing very much further regardless of how much data or GPU’s you throw at the problem.

Put simply, the DNN and transformer are looking, via correlation, to build discriminators. But they have no inbuilt ability to know whether any given correlation is significant or irrelevant. Creatures with agency actually “learn from experience” by “testing their view of their environment”.

Current AI LLM and ML systems do not have “agency” therefore their view is “fixed” therefore they cannot “test” their environment or even change their environment.

As an example humans and most other mammals have a sense of distance and direction due to in effect having two eyes, two ears, and two hands (or sets of whiskers etc).

This enables them to actually realise by a small self movement that what looks like a small person is actually a larger person further away. They can then judge threat by distance and come up with a much better danger metric. They can also quickly judge if a distant threat is moving or not by more distant objects and importantly if they are on an intercept course because their relative position does not change.

You cannot get this from discrete photographs etc.; you need what is effectively continuous input, where you choose the frame of reference by self-movement.

I’ve known and talked about this limitation since the 1980s, when I used to “play” (more correctly, research 😉) with robots and micro-mice. Nearly getting your head smashed in by an industrial robot arm is pause for some considerable thought and reflection.

So your points are observationally correct, but they need to go a little further. That is, they need to be sufficiently refined, by testing and research, that you can get the further information that gets you to the point where you can solve them. The fact that billions of dollars of investment money is not being pushed this way by OpenAI, Google, Meta, Microsoft, Musk, et al should tell you something…

That is they currently see AI as a tool for surveillance rather than actual “Artificial learning and reasoning”.

Clive Robinson March 27, 2025 8:10 AM

@ Who?, ALL,

With regards,

“I just want to know if a BIOS can be hardened enough to be considered reasonably secure against external threats.”

Short answer is,

“No, and it will get less secure with time.”

The reason is as I’ve mentioned before… There are,

1, Known Knowns
2, Unknown Knowns
3, Unknown Unknowns

That is, the BIOS you have is now “static” and won’t change from now on. But… as we know, all non-trivial code is full of bugs that are potential vulnerabilities. With time, new attack methods convert bugs into exploitable vulnerabilities. That is your “Unknown Unknowns” shifting to “Unknown Known” attacks: “the method” becomes known but the actual vulnerabilities still have to be found[1], so mitigation based on observational evidence is the stratagem (we’ve just seen an example of this with the *nix “atop”). As they are found they become “Known Knowns”, which in non-EOL’d code should get “patches” issued in a timely fashion. If the code is EOL’d, then all you can do is “mitigate” based on the observed behaviour of the attack… which may not be sufficient.

It’s this progression that gave rise to our host @Bruce once commenting,

“Attacks only improve with time…”

So if extra hardening “is out”, which it appears to be, then you have to move from “in-device fixing” to “outside-of-device fixing”, or “perimeter defence”, which is at the end of the day what most “mitigation” is; you just have to decide where to put the perimeter.

Which is usually decided by “policy” and “operational requirements”.

So if you’ve not already drawn them up in sufficient depth, that is what you need to do.

Oh, and get that perimeter either as tight as possible or as broad as possible; anything else would be “half-assed” and need continuous adjustment as “operational requirements” or “policy” from within or without[2] dictate.

My advice on that is draw plans up for both. Because more often than not mitigating the whole organisation is actually less expensive as well as more beneficial. However mitigating just the systems means any rules can be significantly stricter thus in all probability more secure in the long term.

[1] It’s why you should always mitigate “methods” or “classes” of attack, not a single “instance”. As I point out when giving a talk about it,

“Properly designed ‘fire drills’ should work for all or the most likely reasons to evacuate a building, not just smoke and flames… Bomb threats, incipient bad weather, post bad weather, post earthquake, and even air strike.”

One of the unsung heroes of 9/11 was a gentleman who made practicing “fire drills” not just mandatory but frequent. The result: nearly everyone who took part in the drills got out safely.

https://en.m.wikipedia.org/wiki/Rick_Rescorla

[2] A known dictate of policy “from without” is when an organisation changes how it does business. One a lot of organisations went through was the “Payment Card Industry” (PCI) requirements, which require regular auditing. Another that requires external auditing is “Government Agencies” information-handling requirements, of which the DoD’s can be among the worst and not particularly effective, for a number of reasons (hence seen as a waste of “time and money”).

Dancing on thin ice March 27, 2025 9:14 AM

RE: Signal

• The NSA had warned the app was not secure enough to use and subject to attacks.
• The app may have been used because participants could use emojis.
• The chat was set to erase after 4 weeks despite rules on preservation of government communications.
• Using the app on top of skipping complete background checks adds to the appearance of a lackadaisical approach to security that may influence the sharing of intelligence from other countries.
• Not much coverage from security experts leaves the public relying upon speculation and those pushing politics.

ResearcherZero March 29, 2025 3:37 PM

@Dancing on thin ice

All of the politicians of each major party were briefed on all of today’s challenges thirty years ago. They have all been asleep in the back of their chauffeur-driven cars. At least the chauffeurs knew how to drive. Not only is the current administration unlicensed, but they are drunk at the wheel, having fired the chauffeurs. Despite the faux bravado and claims of strategy, they are all wearing brown underpants, shrouded in adult diapers. A competent government does not need to hide from scrutiny, or to avoid responsibility and silence its critics. Accordingly, a cohesive military is an effective military. One with faith in the command structure.

‘https://eu.usatoday.com/story/news/politics/2025/03/28/pilots-hegseth-signal-atlantic/82702567007/

Military personnel had concerns about Hegseth’s complacency and lack of command experience.
https://www.nbcnews.com/politics/national-security/military-officers-worry-pete-hegseth-turn-blind-eye-us-war-crimes-rcna183732

Three quarters of Americans are concerned over the security handling failure.
https://www.axios.com/2025/03/27/trump-signal-group-chat-yemen-strike-poll

ResearcherZero March 31, 2025 6:59 AM

How to get someone onto the Supreme Court bench in Wisconsin.

‘https://apnews.com/article/wisconsin-supreme-court-petition-million-dollars-law-3501e3c50d6c55e585d67da6b5513208
