Tuesday, December 16, 2025

OpenAI Is Adding Layered Safeguards To Protect Itself

Tech giant OpenAI says its models' cybersecurity capabilities are advancing rapidly, with capture-the-flag (CTF) performance jumping from 27 percent on GPT-5 in August 2025 to 76 percent on GPT-5.1-Codex-Max in November 2025.

According to a report by Neetika Walter of Interesting Engineering, the spike shows how quickly AI systems are acquiring technical proficiency in security tasks.

The report added that the company expects future models could reach "High" capability levels under its Preparedness Framework.

That means models powerful enough to develop working zero-day exploits or assist with sophisticated enterprise intrusions.

In anticipation, OpenAI says it is preparing safeguards as if every new model could reach that threshold, ensuring progress is paired with strong risk controls.

OpenAI is expanding investments in models designed to support defensive workflows, from auditing code to patching vulnerabilities at scale.

The company says its aim is to give defenders an edge in a landscape where they are often "outnumbered and under-resourced."

Because offensive and defensive cyber tasks rely on the same knowledge, OpenAI says it is adopting a defense-in-depth approach rather than depending on any single safeguard.

The company emphasizes shaping "how capabilities are accessed, guided, and applied" to ensure AI strengthens cybersecurity rather than lowering barriers to misuse.

OpenAI notes that this work is a long-term commitment, not a one-off safety effort. Its goal is to continually reinforce defensive capacity as models become more capable.

At the foundation, OpenAI uses access controls, hardened infrastructure, egress restrictions, and comprehensive monitoring. These systems are supported by detection and response layers, plus internal threat intelligence programs.

Training also plays a critical role. OpenAI says it is teaching its frontier models "to refuse or safely respond to requests that would enable clear cyber abuse," while staying helpful for legitimate defensive and educational needs.

Company-wide detection systems monitor for potential misuse. When activity appears unsafe, OpenAI may block outputs, redirect prompts to safer models, or escalate to enforcement teams.

Both automated tools and human reviewers contribute to these decisions, factoring in severity, legal requirements, and repeat behavior.
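To make that reported flow concrete, here is a minimal, purely illustrative sketch of what such a tiered decision could look like. OpenAI has not published its enforcement logic; the Signal fields, thresholds, and Action names below are invented, based only on the block, reroute, and escalate behaviors and the severity, legal, and repeat-behavior factors described above.

```python
# Hypothetical sketch only: this is not OpenAI's implementation. It illustrates
# the kind of tiered decision the article describes, where automated scoring
# feeds a block / reroute / escalate choice. Names and thresholds are invented.

from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    BLOCK = "block"          # refuse and return a safe completion
    REROUTE = "reroute"      # send the prompt to a more restricted model
    ESCALATE = "escalate"    # queue for human / enforcement review


@dataclass
class Signal:
    abuse_score: float       # 0.0-1.0 from an automated misuse classifier
    repeat_offender: bool    # prior policy violations on the account
    legal_hold: bool         # activity that triggers legal reporting duties


def route(signal: Signal) -> Action:
    """Combine automated signals into a single enforcement decision."""
    if signal.legal_hold or (signal.abuse_score > 0.9 and signal.repeat_offender):
        return Action.ESCALATE
    if signal.abuse_score > 0.9:
        return Action.BLOCK
    if signal.abuse_score > 0.5:
        return Action.REROUTE
    return Action.ALLOW


if __name__ == "__main__":
    print(route(Signal(abuse_score=0.95, repeat_offender=False, legal_hold=False)))  # Action.BLOCK
    print(route(Signal(abuse_score=0.60, repeat_offender=True, legal_hold=False)))   # Action.REROUTE
```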

The company is also relying on end-to-end red teaming. External experts attempt to break every layer of defense, "just like a determined and well-resourced adversary," helping identify weaknesses early.


Thursday, December 11, 2025

Instagram Employees Will Now Have To Report Five Days A Week F2F

Instagram’s CEO has expressed his desire to put employees back in the office five days a week, but the last thing he wants is a return to business as usual.

Instagram chief Adam Mosseri said that starting 2 February 2026, U.S. employees will need to return to the office full-time, according to a companywide memo first reported by Business Insider; a spokesperson confirmed the memo’s authenticity. However, Instagram’s New York City employees won’t be forced back five days a week until the company has "alleviated the space constraints," and remote employees are exempt from the change.

In justifying the new policy, Mosseri cited the usual corporate talking point of increasing collaboration, and said creativity will be better in person, too.

Yet he also noted that to create a "winning culture," the Meta subsidiary needed a shake-up of routine. Unnecessary meetings and endless PowerPoints need to be replaced with clear objectives and more prototypes, he wrote. One-on-one meetings should also be biweekly by default, he added, and employees should feel free to decline meetings that fall within their "focus blocks."

"Every six months, we’ll cancel all recurring meetings and only re-add the ones that are absolutely necessary," he wrote.

Instead of slide decks, Mosseri also said employees should be presenting more prototypes—especially when it comes to product overviews. Prototypes, he said, help the company better establish a proof of concept and get a sense of "social dynamics."

"If a deck is necessary, it should be as tight as possible," he wrote.

Product review meetings should also have clear objectives. "I want most of your time focused on building great products, not preparing for meetings," he wrote.

Big tech companies have been slowly doing away with the flexible work-from-home policies that defined the pandemic and the years that followed, yet few have called for five-day returns thus far. Instagram parent Meta has required employees to be in the office three days a week since 2023.


Saturday, December 6, 2025

IBM CEO Expresses Doubt On Profit Targets Of Amazon And Google

While giant tech companies like Google and Amazon tout the billions they’re pouring into AI infrastructure, IBM’s CEO believes their bets may not pay off the way they think.

Arvind Krishna, who has been at the helm of the legacy tech company since 2020, said even a simple calculation reveals there is "no way" tech companies' massive data center investments make sense. This is in part because data centers require huge amounts of energy and investment, Krishna said on the Decoder podcast.

Goldman Sachs estimated earlier this year that the total power usage by the global data center market stood at around 55 gigawatts, of which only a fraction (14 percent) is dedicated to AI. As demand for AI grows, the power required by the data center market could jump to 84 gigawatts by 2027, according to Goldman Sachs.

Yet building out a data center that uses merely one gigawatt costs a fortune: an estimated US$ 80 billion in today’s dollars, according to Krishna. If a single company commits to building out 20 to 30 gigawatts, that would amount to US$ 1.5 trillion in capital expenditures, Krishna said. That’s an investment about equal to Tesla’s current market cap.

All the hyperscalers together could potentially add about 100 gigawatts, he estimated, but that still requires US$ 8 trillion in investment—and the profit needed to balance out that investment is immense.

"It’s my view that there’s no way you’re going to get a return on that, because US$ 8 trillion of capex [capital expenditure] means you need roughly US$ 800 billion of profit just to pay for the interest," he said.

Moreover, thanks to technology’s rapid advance, the chips powering your data center could quickly become obsolete.

"You’ve got to use it all in five years, because at that point, you’ve got to throw it away and refill it," he said.

Krishna added that part of the motivation behind this flurry of investment is large tech companies’ race to be the first to crack AGI, or an AI that can match or surpass human intelligence.

Yet Krishna says there’s at most a 1 percent chance this feat can be accomplished with our current technology, despite the steady improvement of large language models.

"I think it’s incredibly useful for enterprise. I think it’s going to unlock trillions of dollars of productivity in the enterprise, just to be absolutely clear," he said. "That said, I think AGI will require more technologies than the current LLM path."


Friday, December 5, 2025

Delivery Robots Are Taking Over Chicago Streets

Even as Chicago residents got used to seeing federal immigration agents rampaging through their neighborhoods, they were caught unaware by hundreds of unwanted side characters that managed to worm their way in seemingly overnight.

Over the past few months, swarms of delivery robots have taken over the city’s sidewalks, prompting locals to call for them to be shut down. The delivery robots are part of a pilot program engineered by two companies: Coco, operating pink bots sporting obnoxious flag poles, and Serve, whose green and white bots sport two LED eyes.

As first reported by CBS, over 700 residents from 35 Chicago-area zip codes have signed a petition demanding that the city halt the rollout. Called Sidewalks are for People, the campaign demands city officials pause the pilots until they release data from studies on safety and accessibility, as well as the services’ impact on jobs.

"Are these things safe?" organizer Josh Robertson asked CBS. "Are our sidewalks safer with the robots than they were without? Are they more accessible with the robots than they were without?"

Some collisions have been reported already, including one in which a Coco-bot’s flagpole sent Chicago resident Anthony Jonas to the hospital. "I stumbled over it, and I whacked my eyelid against the visibility flag that’s attached to the robot ... blood and urgent care, stitches. The whole thing," he told CBS.

While some Chicagoans are focused on the civic road, others seem ready for vigilante action.

"I want to smash those things to pieces with hammers," one resident raged in a Chicago subreddit.

"People in LA have started leaving their dog poop bags on top of them," another commenter suggested.

Though several Chicago aldermen have begun canvassing for residents’ opinions on the pilot programs, it remains to be seen whether city officials will halt the rollout.

There’s also the question of how the robots will fare when the sidewalks are covered in Chicago’s notorious ice and snow. The pilot is set to run until at least May of 2026, and Windy City winters aren’t for the weak.


Thursday, December 4, 2025

Google Looking Into The New Oracle Extortion Wave

Google Mandiant and Google Threat Intelligence Group (GTIG) have recently disclosed that they are trying to track down a new cluster of activity possibly linked to a financially motivated threat actor known as Cl0p.

The malicious activity involves sending extortion emails to executives at various organizations and claiming to have stolen sensitive data from their Oracle E-Business Suite.

"This activity began on or before 29 September 2025, but Mandiant's experts are still in the early stages of multiple investigations, and have not yet substantiated the claims made by this group," Genevieve Stark, Head of Cybercrime and Information Operations Intelligence Analysis at GTIG, told The Hacker News in a statement.

Mandiant CTO Charles Carmakal described the ongoing activity as a "high-volume email campaign" that's launched from hundreds of compromised accounts, with evidence suggesting that at least one of those accounts has been previously associated with activity from FIN11, which is a subset within the TA505 group.

FIN11, per Mandiant, has engaged in ransomware and extortion attacks as far back as 2020. Previously, it was linked to the distribution of various malware families like FlawedAmmyy, FRIENDSPEAK, and MIXLABEL.

"The malicious emails contain contact information, and we've verified that the two specific contact addresses provided are also publicly listed on the Cl0p data leak site (DLS)," Carmakal added. "This move strongly suggests there's some association with Cl0p, and they are leveraging the brand recognition for their current operation."

That said, Google noted it does not have any evidence of its own to confirm the alleged ties, despite similarities to tactics observed in past Cl0p attacks. The company is also urging organizations to investigate their environments for evidence of threat actor activity.

It's currently not clear how initial access was obtained. However, according to Bloomberg, citing information shared by Halcyon, the attackers are believed to have compromised user email accounts and abused the default password reset function to obtain valid credentials for internet-facing Oracle E-Business Suite portals.
