Strawberry fields forever will exist for the in-demand fruit, but the laborers who do the backbreaking work of harvesting them might continue to dwindle. While cultivation on raised, high beds somewhat eases the manual labor, the need for robots that can help harvest strawberries, tomatoes, and other such produce is apparent.

As a first step, Osaka Metropolitan University Assistant Professor Takuya Fujinaga has developed an algorithm for robots to autonomously drive in two modes: moving to a pre-designated destination and moving alongside raised cultivation beds. The Graduate School of Engineering researcher experimented with an agricultural robot that utilizes lidar point cloud data to map the environment.
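To make the two driving modes concrete, here is a minimal Python sketch of how such mode switching could look, assuming a hypothetical 2D lidar point cloud expressed in the robot's own frame; the function names, thresholds, and bed-detection heuristic are illustrative assumptions, not the published algorithm.

```python
import numpy as np

# Hypothetical sketch of the two driving modes described above:
# (1) heading toward a pre-designated waypoint, and
# (2) following alongside a raised cultivation bed detected in lidar data.

def detect_bed_edge(points_xy: np.ndarray, side_max_dist: float = 1.5):
    """Return the mean lateral offset of lidar points on the robot's right side,
    or None if too few points are present to treat as a cultivation bed."""
    side = points_xy[(points_xy[:, 1] < 0) & (points_xy[:, 1] > -side_max_dist)]
    if len(side) < 50:
        return None
    return float(side[:, 1].mean())

def drive_step(points_xy, pose_xy, heading, waypoint, target_offset=-0.6):
    """Choose a steering command and speed for one control step."""
    bed_offset = detect_bed_edge(points_xy)
    if bed_offset is not None:
        # Bed-following mode: hold a fixed lateral distance from the bed edge.
        steer = 1.5 * (target_offset - bed_offset)
        return "follow_bed", steer, 0.3
    # Waypoint mode: steer toward the pre-designated destination.
    dx, dy = waypoint[0] - pose_xy[0], waypoint[1] - pose_xy[1]
    steer = np.arctan2(dy, dx) - heading
    steer = (steer + np.pi) % (2 * np.pi) - np.pi  # wrap angle to [-pi, pi]
    return "to_waypoint", float(steer), 0.5
```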



PRESS RELEASE — Quantum computers promise to speed calculations dramatically in some key areas such as computational chemistry and high-speed networking. But they’re so different from today’s computers that scientists need to figure out the best ways to feed them information to take full advantage. The data must be packed in new ways, customized for quantum treatment.

Researchers at the Department of Energy’s Pacific Northwest National Laboratory have done just that, developing an algorithm specially designed to prepare data for a quantum system. The code, published recently on GitHub after being presented at the IEEE International Symposium on Parallel and Distributed Processing, cuts a key aspect of quantum prep work by 85 percent.

While the team demonstrated the technique previously, the latest research addresses a critical bottleneck related to scaling and shows that the approach is effective even on problems 50 times larger than possible with existing tools.
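As a rough illustration of what "packing data for quantum treatment" involves (not the PNNL algorithm itself), the sketch below shows the textbook amplitude-encoding step: padding a classical vector to a power-of-two length and normalizing it so it can serve as a valid quantum state vector on n qubits.

```python
import numpy as np

# Illustrative sketch only: classical data must be resized and normalized
# before it can be loaded as the amplitudes of a quantum state.

def amplitude_encode(data: np.ndarray) -> np.ndarray:
    """Pad a real-valued vector to the next power of two and normalize it
    so its squared amplitudes sum to 1 (a valid quantum state vector)."""
    n_qubits = int(np.ceil(np.log2(len(data))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(data)] = data
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode an all-zero vector")
    return padded / norm

state = amplitude_encode(np.array([0.3, 1.2, 0.8, 0.1, 0.5]))
print(len(state), np.isclose(np.sum(state ** 2), 1.0))  # 8 True
```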

This book dives into the holy grail of modern physics: the union of quantum mechanics and general relativity. It’s a front-row seat to the world’s brightest minds (like Hawking, Witten, and Maldacena) debating what reality is really made of. Not casual reading—this is heavyweight intellectual sparring.

☼ Key Takeaways:
✅ Spacetime Is Not Continuous: It might be granular at the quantum level—think “atoms of space.”
✅ Unifying Physics: String theory, loop quantum gravity, holography—each gets a say.
✅ High-Level Debates: This is like eavesdropping on the Avengers of physics trying to fix the universe.
✅ Concepts Over Calculations: Even without equations, the philosophical depth will bend your brain.
✅ Reality Is Weirder Than Fiction: Quantum foam, time emergence, multiverse models—all explored.

This isn’t a how-to; it’s a “what-is-it?” If you’re obsessed with the ultimate structure of reality, this is your fix.


ChatGPT and similar tools often amaze us with the accuracy of their answers, but unfortunately, they also repeatedly give us cause for doubt. The main issue with these powerful AI response engines is that they deliver perfect answers and obvious nonsense with the same ease. One of the major challenges lies in how the large language models (LLMs) underlying such AI deal with uncertainty.

Until now, it has been very difficult to assess whether LLMs designed for text processing and generation base their responses on a solid foundation of data or whether they are operating on uncertain ground.

Researchers at the Institute for Machine Learning at the Department of Computer Science at ETH Zurich have now developed a method that can be used to specifically reduce the uncertainty of AI. The work is published on the arXiv preprint server.
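One common, generic way to surface this uncertainty, shown here only to illustrate the problem the ETH Zurich team is tackling rather than their method, is to sample the model several times and measure how much its answers disagree. The `generate` function below is a hypothetical stand-in for any chat-completion API.

```python
import math
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    # Hypothetical stand-in: replace with a call to your model of choice.
    raise NotImplementedError("plug in your model API here")

def answer_entropy(prompt: str, n_samples: int = 10) -> float:
    """Shannon entropy (in bits) over repeated sampled answers.
    0.0 means the model always gives the same answer; higher values
    indicate it is operating on more uncertain ground."""
    counts = Counter(generate(prompt) for _ in range(n_samples))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```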

Artificial intelligence (AI) shows tremendous promise for analyzing vast medical imaging datasets and identifying patterns that may be missed by human observers. AI-assisted interpretation of brain scans may help improve care for children with brain tumors called gliomas, which are typically treatable but vary in risk of recurrence.

Investigators from Mass General Brigham and collaborators at Boston Children’s Hospital and Dana-Farber/Boston Children’s Cancer and Blood Disorders Center trained deep learning algorithms to analyze sequential, post-treatment brain scans and flag patients at risk of cancer recurrence.

Their results are published in NEJM AI.
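A schematic of one standard pattern for this kind of task, offered only as an illustrative sketch and not as the published model: encode each follow-up scan independently, pool the per-scan embeddings over time, and output a single recurrence-risk score. Layer sizes and architecture are assumptions.

```python
import torch
import torch.nn as nn

class SequentialScanRisk(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Tiny 3D CNN encoder applied to a single scan volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, embed_dim),
        )
        self.head = nn.Linear(embed_dim, 1)  # recurrence-risk logit

    def forward(self, scans: torch.Tensor) -> torch.Tensor:
        # scans: (batch, time, 1, D, H, W) -- one patient's post-treatment series
        b, t = scans.shape[:2]
        emb = self.encoder(scans.flatten(0, 1)).view(b, t, -1)
        return self.head(emb.mean(dim=1)).squeeze(-1)  # pool over time
```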


An innovative algorithm for detecting collisions of high-speed particles within nuclear fusion reactors has been developed, inspired by technologies used to determine whether bullets hit targets in video games. This advancement enables rapid predictions of collisions, significantly enhancing the stability and design efficiency of future fusion reactors.

Professor Eisung Yoon and his research team in the Department of Nuclear Engineering at UNIST announced that they have successfully developed a collision detection algorithm capable of quickly identifying collision points of high-speed particles within virtual devices. The research is published in the journal Computer Physics Communications.

When applied to the Virtual KSTAR (V-KSTAR), this algorithm demonstrated a detection speed up to 15 times faster than previous methods. The V-KSTAR is a digital twin that replicates the Korean Superconducting Tokamak Advanced Research (KSTAR) fusion experiment in a three-dimensional virtual environment.
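The game-engine idea referenced here is, generically, a fast segment-versus-bounding-box test performed before any expensive geometric check. The sketch below shows the classic "slab" test for a particle's straight-line step against an axis-aligned box; it illustrates the general technique used for bullet hit detection, not the UNIST implementation.

```python
import numpy as np

def segment_hits_aabb(p0, p1, box_min, box_max) -> bool:
    """Return True if the segment p0 -> p1 intersects the axis-aligned box."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        if abs(d[axis]) < 1e-12:
            # Segment parallel to this slab: reject if it lies outside it.
            if p0[axis] < box_min[axis] or p0[axis] > box_max[axis]:
                return False
        else:
            t1 = (box_min[axis] - p0[axis]) / d[axis]
            t2 = (box_max[axis] - p0[axis]) / d[axis]
            t_min = max(t_min, min(t1, t2))
            t_max = min(t_max, max(t1, t2))
            if t_min > t_max:
                return False
    return True

# Example: a particle step that passes through a unit box.
print(segment_hits_aabb([-1, -1, -1], [2, 2, 2], [0, 0, 0], [1, 1, 1]))  # True
```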

MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.

For instance, the researchers used their framework to combine elements of two different algorithms to create a new image-classification algorithm that performed 8% better than current state-of-the-art approaches.

The periodic table stems from one key idea: All these algorithms learn a specific kind of relationship between data points. While each algorithm may accomplish that in a slightly different way, the core mathematics behind each approach is the same.
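One way to make that shared core concrete, under the assumption that "learning a relationship between data points" can be phrased as matching two distributions over neighbor pairs, is sketched below: a target distribution comes from supervision (labels, augmentations, graph edges), a learned distribution comes from the model's embeddings, and training minimizes the divergence between them. The specific affinities and divergence chosen here are illustrative, not the paper's exact formulation.

```python
import numpy as np

def neighbor_distribution(embeddings: np.ndarray, temperature: float = 1.0):
    """Softmax over negative squared distances: row i gives how strongly
    point i 'attends' to every other point under the current embedding."""
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    np.fill_diagonal(logits, -np.inf)            # no self-neighbors
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    q = np.exp(logits)
    return q / q.sum(axis=1, keepdims=True)

def relationship_loss(p: np.ndarray, q: np.ndarray, eps: float = 1e-12):
    """Average KL(p || q): how far the learned pairwise relationships
    are from the target relationships."""
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)))
```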

ICLR 2025

Shaden Alshammari, John Hershey, Axel Feldmann, William T. Freeman, Mark Hamilton.

MIT, Microsoft, Google.

https://mhamilton.net/icon

https://openreview.net/forum?id=WfaQrKCr4X

https://github.com/mhamilton723/STEGO

Genome editing has advanced at a rapid pace with promising results for treating genetic conditions—but there is always room for improvement. A new paper by investigators from Mass General Brigham showcases the power of scalable protein engineering combined with machine learning to boost progress in the field of gene and cell therapy.

In their study, the authors developed a machine learning algorithm—known as PAMmla—that can predict the properties of approximately 64 million enzymes. The work could help reduce off-target effects and improve editing safety, enhance editing efficiency, and enable researchers to predict customized enzymes for new therapeutic targets. The results are published in Nature.

“Our study is a first step in dramatically expanding our repertoire of effective and safe CRISPR-Cas9 enzymes. In our manuscript, we demonstrate the utility of these PAMmla-predicted enzymes to precisely edit disease-causing sequences in primary cells and in mice,” said corresponding author Ben Kleinstiver, Ph.D., Kayden-Lambert MGH Research Scholar and associate investigator at Massachusetts General Hospital (MGH).
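As a toy illustration of the general workflow such a predictor implies (not the PAMmla code): one-hot encode the amino acids at a few engineered positions of an enzyme, fit a model on variants with measured activity, then score unseen combinations in silico. The positions, synthetic data, and model choice below are all assumptions.

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
N_POSITIONS = 4  # hypothetical number of engineered positions

def encode(variant: str) -> np.ndarray:
    """One-hot encode the amino acid at each variable position."""
    x = np.zeros(N_POSITIONS * len(AMINO_ACIDS))
    for i, aa in enumerate(variant):
        x[i * len(AMINO_ACIDS) + AMINO_ACIDS.index(aa)] = 1.0
    return x

# Pretend training set: random variants with synthetic activity scores.
rng = np.random.default_rng(0)
train_variants = ["".join(rng.choice(list(AMINO_ACIDS), N_POSITIONS)) for _ in range(500)]
X = np.stack([encode(v) for v in train_variants])
y = rng.random(len(train_variants))  # placeholder for measured activity

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Score a few unseen combinatorial variants in silico.
candidates = ("".join(c) for c in itertools.product(AMINO_ACIDS, repeat=N_POSITIONS))
for v in itertools.islice(candidates, 3):
    print(v, round(float(model.predict(encode(v)[None, :])[0]), 3))
```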
