Windows

Microsoft Brings Native PyTorch Arm Support To Windows Devices (neowin.net) 3

Microsoft has announced native PyTorch support for Windows on Arm devices with the release of PyTorch 2.7, making it significantly easier for developers to build and run machine learning models directly on Arm-powered Windows machines. This eliminates the need for manual compilation and opens up performance gains for AI tasks like image classification, NLP, and generative AI. Neowin reports: With the release of PyTorch 2.7, native Arm builds for Windows on Arm are now readily available for Python 3.12. This means developers can simply install PyTorch using a standard package manager like pip.

According to Microsoft: "This unlocks the potential to leverage the full performance of Arm64 architecture on Windows devices, like Copilot+ PCs, for machine learning experimentation, providing a robust platform for developers and researchers to innovate and refine their models."
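Getting started is now a one-liner. A minimal sanity check, assuming the standard pip-installed wheel on a Windows on Arm machine running Python 3.12, might look like this:

```python
# Sanity check for the native Arm64 PyTorch build on Windows on Arm.
# Install first with: pip install torch
import torch

print(torch.__version__)   # expect 2.7 or later
x = torch.rand(4, 4)
y = torch.rand(4, 4)
print((x @ y).shape)       # basic tensor math running natively on Arm64
```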

Security

AI Hallucinations Lead To a New Cyber Threat: Slopsquatting (csoonline.com) 51

Researchers have uncovered a new supply chain attack called Slopsquatting, where threat actors exploit hallucinated, non-existent package names generated by AI coding tools like GPT-4 and CodeLlama. These believable yet fake packages, representing almost 20% of the samples tested, can be registered by attackers to distribute malicious code. CSO Online reports: Slopsquatting, as researchers are calling it, is a term first coined by Seth Larson, a security developer-in-residence at the Python Software Foundation (PSF), for its resemblance to the typosquatting technique. Instead of relying on a user's mistake, as in typosquats, threat actors rely on an AI model's mistake. A significant share of the packages recommended in test samples -- 19.7% (205,000 packages) -- were found to be fakes. Open-source models -- like DeepSeek and WizardCoder -- hallucinated more frequently, at 21.7% on average, compared to commercial ones (5.2%) like GPT-4. Researchers found CodeLlama (hallucinating over a third of its outputs) to be the worst offender, and GPT-4 Turbo (just 3.59% hallucinations) to be the best performer.

These package hallucinations are particularly dangerous as they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of hallucinations reappeared every time in 10 successive re-runs, with 58% of them appearing in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts." This increases their value to attackers, it added. Additionally, these hallucinated package names were observed to be "semantically convincing." Thirty-eight percent of them had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.
The research can be found in a paper on arXiv.org (PDF).
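One takeaway for developers: before installing a package an AI assistant suggests, it's worth at least confirming the name exists on PyPI. A minimal sketch using PyPI's public JSON endpoint follows; existence alone proves little, since a slopsquatter may already have registered the name, so treat this as a first filter only:

```python
# First-pass filter for AI-suggested dependencies: does the package even
# exist on PyPI? Existence is not safety -- an attacker may have registered
# a hallucinated name -- so follow up by checking age, maintainers, downloads.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

print(exists_on_pypi("requests"))   # True: long-established package
print(exists_on_pypi("definitely-not-a-real-pkg-123"))  # False
```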
AI

OpenAI Unveils o3 and o4-mini Models (openai.com) 2

OpenAI has released two new AI models that can "think with images" during their reasoning process. The o3 and o4-mini models represent a significant advancement in visual perception, enabling them to manipulate images -- cropping, zooming, and rotating -- as part of their analytical process.

Unlike previous models, o3 and o4-mini can agentically use all of ChatGPT's tools, including web search, Python code execution, and image generation. This allows them to tackle multi-faceted problems by selecting appropriate tools based on the task at hand.

The models have set new state-of-the-art performance benchmarks across multiple domains. On visual tasks, o3 achieved 86.8% accuracy on MathVista and 78.6% on CharXiv-Reasoning, while o4-mini scored 91.6% on the AIME 2024 math competition. In expert evaluations, o3 made 20% fewer major errors than its predecessor on complex real-world tasks. ChatGPT Plus, Pro, and Team users will see o3, o4-mini, and o4-mini-high in the model selector starting today, replacing o1, o3-mini, and o3-mini-high.
Programming

You Should Still Learn To Code, Says GitHub CEO (businessinsider.com) 45

You should still learn to code, says GitHub's CEO. And you should start as soon as possible. From a report: "I strongly believe that every kid, every child, should learn coding," Thomas Dohmke said in a recent podcast interview with EO. "We should actually teach them coding in school, in the same way that we teach them physics and geography and literacy and math and what-not." Coding, he added, is a similarly fundamental skill -- and the only reason it's not already part of the curriculum is that it took "us too long to actually realize that."

Dohmke, who's been a programmer since the 90s, said he's never seen "anything more exciting" than the current moment in engineering -- the advent of AI, he believes, has made the field that much easier to break into, and is poised to make software more ubiquitous than ever. "It's so much easier to get into software development. You can just write a prompt into Copilot or ChatGPT or similar tools, and it will likely write you a basic webpage, or a small application, a game in Python," Dohmke said. "And so, AI makes software development so much more accessible for anyone who wants to learn coding."

AI, Dohmke said, helps to "realize the dream" of bringing an idea to life, meaning that fewer projects will end up dead in the water, and smaller teams of developers will be enabled to tackle larger-scale projects. Dohmke said he believes it makes the overall process of creation more efficient. "You see some of the early signs of that, where very small startups -- sometimes five developers and some of them actually only one developer -- believe they can become million, if not billion dollar businesses by leveraging all the AI agents that are available to them," he added.

Programming

AI Models Still Struggle To Debug Software, Microsoft Study Shows (techcrunch.com) 43

Some of the best AI models today still struggle to resolve software bugs that wouldn't trip up experienced devs. TechCrunch: A new study from Microsoft Research, Microsoft's R&D division, reveals that models, including Anthropic's Claude 3.7 Sonnet and OpenAI's o3-mini, fail to debug many issues in a software development benchmark called SWE-bench Lite. The results are a sobering reminder that, despite bold pronouncements from companies like OpenAI, AI is still no match for human experts in domains such as coding.

The study's co-authors tested nine different models as the backbone for a "single prompt-based agent" that had access to a number of debugging tools, including a Python debugger. They tasked this agent with solving a curated set of 300 software debugging tasks from SWE-bench Lite.

According to the co-authors, even when equipped with stronger and more recent models, their agent rarely completed more than half of the debugging tasks successfully. Claude 3.7 Sonnet had the highest average success rate (48.4%), followed by OpenAI's o1 (30.2%), and o3-mini (22.1%).
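The paper's exact harness isn't reproduced in this summary, but the "single prompt-based agent" pattern it describes is easy to picture. Below is a hypothetical sketch: call_model is a stub standing in for whichever LLM backs the agent, and the tool set is reduced to a single scripted pdb invocation.

```python
# Hypothetical sketch of a prompt-based debugging agent loop (not the
# study's actual harness). call_model() is a stub for the backing LLM.
import subprocess

def run_python_debugger(script: str, commands: list[str]) -> str:
    """Tool: run a script under pdb with scripted commands, return the output."""
    cmd = ["python", "-m", "pdb"]
    for c in commands:
        cmd += ["-c", c]          # pdb accepts -c commands like a .pdbrc file
    cmd.append(script)
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def call_model(transcript: list[str]) -> dict:
    """Stub: a real agent would send the transcript to a model and parse
    either a tool invocation or a final patch out of the reply."""
    return {"action": "debug", "script": "buggy.py",
            "commands": ["break buggy.py:10", "continue", "p locals()"]}

transcript = ["Task: fix the failing test in buggy.py"]
for _ in range(5):                 # bounded tool-use loop
    step = call_model(transcript)
    if step["action"] != "debug":  # e.g. {"action": "submit_patch", ...}
        break
    transcript.append(run_python_debugger(step["script"], step["commands"]))
```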

AI

Open Source Coalition Announces 'Model-Signing' with Sigstore to Strengthen the ML Supply Chain (googleblog.com) 10

The advent of LLMs and machine learning-based applications "opened the door to a new wave of security threats," argues Google's security blog. (Including model and data poisoning, prompt injection, prompt leaking and prompt evasion.)

So as part of the Linux Foundation's nonprofit Open Source Security Foundation, and in partnership with NVIDIA and HiddenLayer, Google's Open Source Security Team on Friday announced the first stable model-signing library (hosted at PyPI.org), with digital signatures letting users verify that the model used by their application "is exactly the model that was created by the developers," according to a post on Google's security blog. [S]ince models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: "can I trust this model?"

Since its launch, Google's Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing... [T]he signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model...

The average developer, however, would not want to manage keys and rotate them on compromise. These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore's signing mechanism as the default approach for signing ML models.

Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.
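Going by the post's description of the v1.0 package, signing and verifying a model directory would look roughly like the sketch below. The method names are assumptions drawn from the announcement, not verified API, so check the model-signing docs on PyPI before relying on them.

```python
# Illustrative sketch only: signing/verifying a model directory tree with
# the model-signing package (pip install model-signing). Method names here
# are assumptions based on the announcement; consult the package docs.
import model_signing

# Keyless Sigstore signing binds the signature to an OIDC identity,
# so there is no long-lived private key to manage or rotate.
model_signing.signing.Config().use_sigstore_signer().sign(
    "finetuned-bert/", "finetuned-bert.sig")

# Before deployment, verify the directory against the expected identity.
model_signing.verifying.Config().use_sigstore_verifier(
    identity="dev@example.com",
    oidc_issuer="https://accounts.google.com",
).verify("finetuned-bert/", "finetuned-bert.sig")
```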

"We can view model signing as establishing the foundation of trust in the ML ecosystem..." the post concludes (adding "We envision extending this approach to also include datasets and other ML-related artifacts.") Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world...

To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

Python

Python's PyPI Finally Gets Closer to Adding 'Organization Accounts' and SBOMs (mailchi.mp) 1

Back in 2023 Python's infrastructure director called it "the first step in our plan to build financial support and long-term sustainability of PyPI" while giving users "one of our most requested features: organization accounts." (That is, "self-managed teams with their own exclusive branded web addresses" to make their massive Python Package Index repository "easier to use for large community projects, organizations, or companies who manage multiple sub-teams and multiple packages.")

Nearly two years later, they've announced that they're "making progress" on its rollout... Over the last month, we have taken some more baby steps to onboard new Organizations, welcoming 61 new Community Organizations and our first 18 Company Organizations. We're still working to improve the review and approval process and hope to improve our processing speed over time. To date, we have 3,562 Community and 6,424 Company Organization requests to process in our backlog.
They've also onboarded a PyPI Support Specialist to provide "critical bandwidth to review the backlog of requests" and "free up staff engineering time to develop features to assist in that review." (And "we were finally able to finalize our Terms of Service document for PyPI," build the tooling necessary to notify users, and initiate the Terms of Service rollout. [Since launching 20 years ago, PyPI's terms of service have only been updated twice.])

In other news the security developer-in-residence at the Python Software Foundation has been continuing work on a Software Bill-of-Materials (SBOM) as described in Python Enhancement Proposal #770. The feature "would designate a specific directory inside of Python package metadata (".dist-info/sboms") as a directory where build backends and other tools can store SBOM documents that describe components within the package beyond the top-level component." The goal of this project is to make bundled dependencies measurable by software analysis tools like vulnerability scanning, license compliance, and static analysis tools. Bundled dependencies are common for scientific computing and AI packages, but also generally in packages that use multiple programming languages like C, C++, Rust, and JavaScript. The PEP has been moved to Provisional Status, meaning the PEP sponsor is doing a final review before tools can begin implementing the PEP ahead of its final acceptance into changing Python packaging standards. Seth has begun implementing code that tools can use when adopting the PEP, such as a project that abstracts the functionality of different Linux system package managers to map a file path back to the metadata of the package that provides it.
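Once build backends start populating that directory, consuming the documents should be a matter of walking package metadata. A speculative sketch (PEP 770 is still provisional, so few if any installed packages ship these files today):

```python
# Speculative sketch: locate PEP 770 SBOM documents in an installed
# package's metadata. The PEP is provisional; most packages ship none yet.
from importlib.metadata import distribution

dist = distribution("numpy")        # any installed distribution
sbom_docs = [f for f in (dist.files or []) if "sboms" in f.parts]
for doc in sbom_docs:
    print(doc, "->", doc.locate())  # e.g. ...dist-info/sboms/vendored.cdx.json
```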

Security developer-in-residence Seth Larson will be speaking about this project at PyCon US 2025 in Pittsburgh, PA in a talk titled "Phantom Dependencies: is your requirements.txt haunted?"

Meanwhile InfoWorld reports that newly approved Python Enhancement Proposal 751 will also give Python a standard lock file format.
AI

Two Teenagers Built 'Cal AI', a Photo Calorie App With Over a Million Users (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: In a world filled with "vibe coding," Zach Yadegari, teen founder of Cal AI, stands in ironic, old-fashioned contrast. Ironic because Yadegari and his co-founder, Henry Langmack, are both just 18 years old and still in high school. Yet their story, so far, is a classic. Launched in May, Cal AI has generated over 5 million downloads in eight months, Yadegari says. Better still, he tells TechCrunch that the customer retention rate is over 30% and that the app generated over $2 million in revenue last month. [...]

The concept is simple: Take a picture of the food you are about to consume, and let the app log calories and macros for you. It's not a unique idea. For instance, the big dog in calorie counting, MyFitnessPal, has its Meal Scan feature. Then there are apps like SnapCalorie, which was released in 2023 and created by the founder of Google Lens. Cal AI's advantage, perhaps, is that it was built wholly in the age of large image models. It uses models from Anthropic and OpenAI and RAG to improve accuracy and is trained on open source food calorie and image databases from sites like GitHub.

"We have found that different models are better with different foods," Yadegari tells TechCrunch. Along the way, the founders coded through technical problems like recognizing ingredients from food packages or in jumbled bowls. The result is an app that the creators say is 90% accurate, which appears to be good enough for many dieters.
The report says Yadegari began mastering Python and C# in middle school and went on to build his first business in ninth grade -- a website called Totally Science that gave students access to unblocked games (cleverly named to evade school filters). He sold the company at age 16 to FreezeNova for $100,000.

Following the sale, Yadegari immersed himself in the startup scene, watching Y Combinator videos and networking on X, where he met co-founder Blake Anderson, known for creating ChatGPT-powered apps like RizzGPT. Together, they launched Cal AI and moved to a hacker house in San Francisco to develop their prototype.
AI

MCP: the New 'USB-C For AI' That's Bringing Fierce Rivals Together (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: What does it take to get OpenAI and Anthropic -- two competitors in the AI assistant market -- to get along? Despite a fundamental difference in direction that led Anthropic's founders to quit OpenAI in 2020 and later create the Claude AI assistant, a shared technical hurdle has now brought them together: How to easily connect their AI models to external data sources. The solution comes from Anthropic, which developed and released an open specification called Model Context Protocol (MCP) in November 2024. MCP establishes a royalty-free protocol that allows AI models to connect with outside data sources and services without requiring unique integrations for each service.

"Think of MCP as a USB-C port for AI applications," wrote Anthropic in MCP's documentation. The analogy is imperfect, but it represents the idea that, similar to how USB-C unified various cables and ports (with admittedly a debatable level of success), MCP aims to standardize how AI models connect to the infoscape around them. So far, MCP has also garnered interest from multiple tech companies in a rare show of cross-platform collaboration. For example, Microsoft has integrated MCP into its Azure OpenAI service, and as we mentioned above, Anthropic competitor OpenAI is on board. Last week, OpenAI acknowledged MCP in its Agents API documentation, with vocal support from the boss upstairs. "People love MCP and we are excited to add support across our products," wrote OpenAI CEO Sam Altman on X last Wednesday.

MCP has also rapidly begun to gain community support in recent months. For example, just browsing this list of over 300 open source servers shared on GitHub reveals growing interest in standardizing AI-to-tool connections. The collection spans diverse domains, including database connectors like PostgreSQL, MySQL, and vector databases; development tools that integrate with Git repositories and code editors; file system access for various storage platforms; knowledge retrieval systems for documents and websites; and specialized tools for finance, health care, and creative applications. Other notable examples include servers that connect AI models to home automation systems, real-time weather data, e-commerce platforms, and music streaming services. Some implementations allow AI assistants to interact with gaming engines, 3D modeling software, and IoT devices.
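To make the "USB-C port" analogy concrete, here's what a minimal MCP tool server looks like with Anthropic's Python SDK (the mcp package on PyPI); the SDK is evolving quickly, so treat the exact API surface as indicative:

```python
# Minimal MCP tool server sketch using Anthropic's Python SDK
# (pip install mcp). The SDK moves fast; API surface is indicative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a document for the calling model."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()   # speaks MCP over stdio; any MCP client can call word_count
```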

Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run Javascript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.
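As a toy illustration of the interpreted-language path, a Python guest is just ordinary code plus a bundled interpreter compiled into a wasm component. The build command below names componentize-py, one community toolchain for producing wasm32-wasip2 components from Python; the exact flags are illustrative, not verified:

```python
# app.py -- toy guest workload for a wasm component runtime. One community
# route to a wasm32-wasip2 component is componentize-py, which bundles a
# Python interpreter into the image (invocation below is illustrative):
#   componentize-py -d wit/ -w app componentize app -o app.wasm
def handle(request: str) -> str:
    return request.upper()

if __name__ == "__main__":
    print(handle("hello from inside a micro-vm"))
```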

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to less than 1 millisecond in the future."

And there's also double security due to Wasmtime's software-defined runtime sandbox within Hyperlight's larger VM...
Python

Codon Python Compiler Gets Faster - and Changes to Apache 2 License (usenix.org) 4

Slashdot reader rikfarrow summarizes an article they wrote for Usenix.org about the Open Source Python compiler Codon: In 2023 I tried out Codon. At the time I had difficulty compiling the scripts I most commonly used, but was excited by the prospect. Python is essentially single-threaded and checks the shape (type) of each variable as it interprets scripts. Codon fixes types and compiles Python into compact executable binaries that run much faster.

Several things have changed with their latest release: I now have successful compiles, the committers have added a compiled version of NumPy (high-performance math algorithms), and they have changed the open source license to Apache 2.

"The other big news is that Exaloop, the company that is behind Codon, has changed their license to Apache 2..." according to the article, so "commercial use and derivations of Codon are now permitted without licensing."
Piracy

Malicious PyPI Package Exploited Deezer's API, Orchestrating a Distributed Piracy Operation (socket.dev) 24

A malicious PyPI package effectively turned its users' systems "into an illicit network for facilitating bulk music downloads," writes The Hacker News.

Though the package has been removed from PyPI, researchers at security platform Socket.dev say it enabled "coordinated, unauthorized music downloads from Deezer — a popular streaming service founded in France in 2007." Although automslc, which has been downloaded over 100,000 times, purports to offer music automation and metadata retrieval, it covertly bypasses Deezer's access restrictions... The package is designed to log into Deezer, harvest track metadata, request full-length streaming URLs, and download complete audio files in clear violation of Deezer's API terms... [I]t orchestrates a distributed piracy operation by leveraging both user-supplied and hardcoded Deezer credentials to create sessions with Deezer's API. This approach enables full access to track metadata and the decryption tokens required to generate full-length track URLs.

Additionally, the package routinely communicates with a remote server... to update download statuses and submit metadata, thereby centralizing control and allowing the threat actor to monitor and coordinate the distributed downloading operation. In doing so, automslc exposes critical track details — including Deezer IDs, International Standard Recording Codes, track titles, and internal tokens like MD5_ORIGIN (a hash used in generating decryption URLs) — which, when collected en masse, can be used to reassemble full track URLs and facilitate unauthorized downloads...

Even if a user pays for access to the service, the content is licensed, not owned. The automslc package circumvents licensing restrictions by enabling downloads and potential redistribution, which is outside the bounds of fair use...

"The malicious package was initially published in 2019, and its popularity (over 100,000 downloads) indicates wide distribution..."
AI

27-Year-Old EXE Became Python In Minutes. Is AI-Assisted Reverse Engineering Next? (adafruit.com) 150

Adafruit managing director Phillip Torrone (also long-time Slashdot reader ptorrone) shared an interesting blog post. They'd spotted a Reddit post "detailing how someone took a 27-year-old visual basic EXE file, fed it to Claude 3.7, and watched as it reverse-engineered the program and rewrote it in Python." It was an old Visual Basic 4 program they had written in 1997. Running a VB4 exe in 2024 can be a real yak-shaving compatibility nightmare, chasing down outdated DLLs and messy workarounds. So! OP decided to upload the exe to Claude 3.7 with this request:

"Can you tell me how to get this file running? It'd be nice to convert it to Python.">

Claude 3.7 analyzed the binary, extracted the VB 'tokens' (VB is not a fully machine-code-compiled language, which makes this task a lot easier than something from C/C++), identified UI elements, and even extracted sound files. Then, it generated a complete Python equivalent using Pygame. According to the author, the code worked on the first try and the entire process took less than five minutes...

Torrone speculates on what this might mean. "Old business applications and games could be modernized without needing the original source code... Tools like Claude might make decompilation and software archaeology a lot easier: proprietary binaries from dead platforms could get a new life in open-source too."

And maybe Archive.org could even add an LLM "to do this on the fly!"
Perl

Perl's CPAN Security Group is Now a CNA, Can Assign CVEs (perlmonks.org) 10

Active since 1995, the Comprehensive Perl Archive Network (or CPAN) hosts 221,742 Perl modules written by 14,548 authors. This week they announced that the CPAN Security Group "was authorized by the CVE Program as a CVE Numbering Authority (CNA)" to assign and manage CVE vulnerability identifications for Perl and CPAN Modules.

"This is great news!" posted Linux kernel maintainer Greg Kroah-Hartman on social media, saying the announcement came "Just in time for my talk about this very topic in a few weeks about how all open source projects should be doing this" at the Linux Foundation Member Summit in Napa, California. And Curl creator Daniel Stenberg posted "I'm with Greg Kroah-Hartman on this: all Open Source projects should become CNAs. Or team up with others to do it." (Also posting "Agreed" to the suggestion was Seth Larson, the Python Software Foundation's security developer-in-residence involved in their successful effort to become a CNA in 2023.)

444 CNAs have now partnered with the CVE Program, according to their official web site. The announcement from PerlMonks.org: Years ago, a few people decided during the Perl Toolchain Summit (PTS) that it would be a good idea to join forces, ideas and knowledge and start a group to monitor vulnerabilities in the complete Perl ecosystem from core to the smallest CPAN release. The goal was to follow legislation and CVE reports, and help authors in taking actions on not being vulnerable anymore. That group has grown stable over the past years and is now known as CPANSec.

The group has several focus areas, and one of them is channeling CVE vulnerability issues. In that specific goal, a milestone has been reached: CPANSec has just been authorized as a CVE Numbering Authority (CNA) for Perl and modules on CPAN.

Python

Are Fast Programming Languages Gaining in Popularity? (techrepublic.com) 163

In January the TIOBE Index (estimating programming language popularity) declared Python their language of the year. (Though it was already #1 in their rankings, it had shown a 9.3% increase in their ranking system, notes InfoWorld.) TIOBE CEO Paul Jansen says this reflects how easy Python is to learn, adding that "The demand for new programmers is still very high" (and that "developing applications completely in AI is not possible yet.")

In fact, on February's version of the index, the top ten looks mostly static. The only languages dropping appear to be very old ones. Over the last 12 months C and PHP have both fallen on the index — C from the #2 to the #4 spot, and PHP from #10 all the way to #14. (Also dropping is Visual Basic, which fell from #9 to #10.)

But TechRepublic cites another factor that seems to be affecting the rankings: language speed. Fast programming languages are gaining popularity, TIOBE CEO Paul Jansen said in the TIOBE Programming Community Index in February. Fast programming languages he called out include C++ [#2], Go [#8], and Rust [#13 — up from #18 a year ago].

Also, according to the updated TIOBE rankings...

- C++ held onto its place at second from the top of the leaderboard.
- Mojo and Zig are following trajectories likely to bring them into the top 50, and reached #51 and #56 respectively in February.

"Now that the world needs to crunch more and more numbers per second, and hardware is not evolving fast enough, speed of programs is getting important. Having said this, it is not surprising that the fast programming languages are gaining ground in the TIOBE index," Jansen wrote. The need for speed helped Mojo [#51] and Zig [#56] rise...

Rust reached its all-time high in the proprietary points system (1.47%), and Jansen expects Go to be a common sight in the top 10 going forward.

Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OS package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, JavaScript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource-constrained environments where performance and low resource consumption are critical"

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."

AI

Nvidia Unveils $3,000 Personal AI Supercomputer (nvidia.com) 80

Nvidia will begin selling a personal AI supercomputer in May that can run sophisticated AI models with up to 200 billion parameters, the chipmaker has announced. The $3,000 Project Digits system is powered by the new GB10 Grace Blackwell Superchip and can operate from a standard power outlet.

The device delivers 1 petaflop of AI performance and includes 128GB of memory and up to 4TB of storage. Two units can be linked to handle models with 405 billion parameters. "AI will be mainstream in every application for every industry," Nvidia CEO Jensen Huang said. The system runs on Linux-based Nvidia DGX OS and supports PyTorch, Python, and Jupyter notebooks.
Programming

Should First-Year Programming Students Be Taught With Python and Java? (huntnewsnu.com) 175

Long-time Slashdot reader theodp writes: In an op-ed for The Huntington News, fourth-year Northeastern University CS student Derek Kaplan argues that real pedagogical merit is what should count when deciding which language to use to teach CS fundamentals (aka 'Fundies'). He makes the case for Northeastern to reconsider its decision to move from Racket to Python and Java later this year in an overhaul of its first-year curriculum.

"Students will get extensive training in Python, which is currently the most requested language by co-op employers," Northeastern explains (some two decades after a Slashdot commenter made the same Hot Languages = Jobs observation in a spirited 2001 debate on Java as a CS introductory language)...

"I have often heard computer science students complain that Fundies 1 teaches Racket instead of a 'useful language' like Python," Kaplan writes. "But the point of Fundies is not to teach Racket — it is to teach program design skills that can be applied using any programming language. Racket is just the tool it uses to do so. A student who does well in Fundies will have no difficulty applying the same skills to Python or any other language. And with how fast the tech industry changes, is it really worth having a course that teaches just Python when tomorrow, some other language might dominate the industry? Our current curriculum focuses on timeless principles rather than fleeting trends."

Also expressing concerns about the selection of suitable languages for novice programming is King's College CS Prof Michael Kölling, who explains, "One of the drivers is the perceived usefulness of the language in a real-world context. Students (and their parents) often have opinions which language is 'better' to learn. In forming these opinions, the definition of 'better' can often be vague and driven by limited insight. One strong aspect commonly cited is the perceived usefulness of a language in the 'real world.' If a language is widely used in industry, it is more likely to be seen as a useful language to learn." Kölling's recommendation? "We need a new language for teaching novices at secondary school and introductory university level," Kölling concludes. "This language should be designed explicitly for teaching [...] Maintenance and adaptation of this language should be driven by pedagogical considerations, not by industry needs."

However noble their intent, one suspects Kaplan and Kölling may be on a quixotic quest in a money-wins world, outgunned by the demands, resources, and influence of tech giants like Amazon — the top employer of Northeastern MSCS program grads — who pushed back against NSF advice to deemphasize Java in high school CS and dropped $15 million to have tech-backed nonprofit Code.org develop and push a new Java-based, powered-by-AWS CS curriculum into high schools with the support of a consortium of politicians, educators, and tech companies. Echoing Northeastern, an Amazon press release argued the new Java-based curriculum "best prepares students for the next step in their education and careers."

Python

Python in 2024: Faster, More Powerful, and More Popular Than Ever (infoworld.com) 45

"Over the course of 2024, Python has proven again and again why it's one of the most popular, useful, and promising programming languages out there," writes InfoWorld: The latest version of the language pushes the envelope further for speed and power, sheds many of Python's most decrepit elements, and broadens its appeal with developers worldwide. Here's a look back at the year in Python.

In the biggest news of the year, the core Python development team took a major step toward overcoming one of Python's longstanding drawbacks: the Global Interpreter Lock or "GIL," a mechanism for managing interpreter state. The GIL prevents data corruption across threads in Python programs, but it comes at the cost of making threads nearly useless for CPU-bound work. Over the years, various attempts to remove the GIL ended in tears, as they made single-threaded Python programs drastically slower. But the most recent no-GIL project goes a long way toward fixing that issue — enough that it's been made available for regular users to try out.

The no-GIL or "free-threaded" builds are still considered experimental, so they shouldn't be deployed in production yet. The Python team wants to alleviate as much of the single-threaded performance impact as possible, along with any other concerns, before giving the no-GIL builds the full green light. It's also entirely possible these builds may never make it to full-blown production-ready status, but the early signs are encouraging.
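Readers with a free-threaded 3.13 build can verify the behavior themselves. The sketch below checks whether the GIL is active (sys._is_gil_enabled() exists only on 3.13+) and times CPU-bound threads, which only scale across cores when the GIL is off:

```python
# Experiment for a free-threaded (no-GIL) CPython 3.13 build. On a stock
# build the GIL check reports True and the threads provide no speedup.
import sys
import time
from threading import Thread

print("GIL enabled:", getattr(sys, "_is_gil_enabled", lambda: True)())

def burn(n: int) -> None:
    while n:              # pure CPU-bound busy loop
        n -= 1

start = time.perf_counter()
threads = [Thread(target=burn, args=(20_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"4 threads: {time.perf_counter() - start:.2f}s")
```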

Another forward-looking feature introduced in Python 3.13 is the experimental just-in-time compiler or JIT. It expands on previous efforts to speed up the interpreter by generating machine code for certain operations at runtime. Right now, the speedup doesn't amount to much (maybe 5% for most programs), but future versions of Python will expand the JIT's functionality where it yields real-world payoffs.

Python is now more widely used than JavaScript on GitHub (thanks partly to its role in AI and data science code).
Google

Google's New Jules AI Agent Will Help Developers Fix Buggy Code (theverge.com) 19

Google has announced an experimental AI-powered code agent called "Jules" that can automatically fix coding errors for developers. From a report: Jules was introduced today alongside Gemini 2.0, and uses the updated Google AI model to create multi-step plans to address issues, modify multiple files, and prepare pull requests for Python and Javascript coding tasks in GitHub workflows.

Microsoft introduced a similar experience for GitHub Copilot last year that can recognize and explain code, alongside recommending changes and fixing bugs. Jules will compete against Microsoft's offering, and also against tools like Cursor and even Claude and ChatGPT's coding abilities. Google's launch of a coding-focused AI assistant is no surprise -- CEO Sundar Pichai said in October that more than a quarter of all new code at the company is now generated by AI.

"Jules handles bug fixes and other time-consuming tasks while you focus on what you actually want to build," Google says in its blog post. "This effort is part of our long-term goal of building AI agents that are helpful in all domains, including coding."
