Thursday, January 8, 2026

Microsoft Forced To Scale Down AI Sales Targets

Nadella
Microsoft has reportedly reduced its sales targets for its agentic AI software after struggling to find buyers interested in using it.

In many cases, targets have been slashed by up to 50 percent, suggesting Microsoft overestimated the potential of its new AI tools. Indeed, compared with ChatGPT and Google's Gemini, Copilot is falling behind, raising concerns about Microsoft's substantial AI investment.

Microsoft was an early investor in many of the latest AI companies. It ended up with a serious stake in OpenAI and benefited from early access to its models, creating Bing Chat and Copilot when Google, Meta, and Anthropic were just getting started. But now its momentum has stalled, and like everyone else, it's not making much money from its AI products. That's because customers aren't buying them, and they aren't buying because very few people actually find them useful, The Information reports.

"The Information’s story inaccurately combines the concepts of growth and sales quotas," Microsoft said in a very defensive statement (via Futurism), adding that "aggregate sales quotas for AI products have not been lowered."

Petulance aside, tests from earlier this year found that AI agents failed to complete tasks up to 70 percent of the time, making them a poor fit as a workforce replacement tool. At best, they're a way for skilled employees to be more productive and save time on low-level tasks, but those tasks were already being handed off to lower-level employees. Having an AI do them and fail most of the time isn't exactly a winning alternative.

Other AI companies are simply doing better, too. Windows Central reports that OpenAI's ChatGPT commands over 61 percent of the market, while Google's Gemini now sits less than a percentage point behind Copilot's 14 percent share. That follows 12 percent growth over the last quarter, suggesting Gemini is well on its way to becoming the real second-place alternative to ChatGPT.


Wednesday, December 17, 2025

Is This "Space Force" Or "Magic: The Gathering"?

Space Force
The U.S. Space Force is looking at some unlikely sources of inspiration for naming its spacecraft and space weaponry.

At the 3rd Annual Spacepower Conference, held in Orlando, Florida, from 10 to 12 December, Chief of Space Operations Gen. Chance Saltzman told attendees that Space Force is adopting new naming schemes for each of its different mission areas that will "cement the identities of space weapon systems" much like the names of iconic aircraft, such as the A-10 Thunderbolt II or F-22 Raptor, have done for the U.S. Air Force.

But while symbols for some of Space Force's mission areas will be similarly borrowed from real-life animals, others are more mythological in nature, Saltzman said. "These include Norse Pantheon, representing the power and dominance of orbital warfare; mythological creatures, conjuring the cunning and adaptability of cyber warfare systems; constellations, reflecting the reach and enduring connection of satellite communications; and ghosts, evoking the silent presence of space domain awareness, just to name a few," Saltzman said at the conference.

Seven different naming categories were chosen, one for each of Space Force's mission areas:

  • Orbital warfare: Norse pantheon
  • Electromagnetic warfare: Snakes
  • Cyber warfare: Mythological creatures
  • Navigation warfare: Sharks
  • Satellite communications: Constellations
  • Missile warning: Sentinels
  • Space domain awareness: Ghosts

After announcing the new naming scheme, Saltzman explained two names that had already been chosen for specific spacecraft. The first, a communications satellite in geostationary orbit previously known as the Ultra-High Frequency Follow-On system, will now be known as Ursa Major.

"The Big Dipper — as you all know, part of the Ursa Major constellation — famously points to Polaris, our north star, always linking us to our most important missions," Saltzman said.

Another spacecraft operated by Space Force's 1st Space Operations Squadron (1 SOPS) used to track satellites in high orbits will now be taking a name from Norse mythology: Bifrost.

"Bifrost is a bridge between Earth and the realm of the gods," Saltzman explained, "just as the Bifrost system in low Earth orbit bridges the divide between the Earth and the higher geostationary orbit of the other 1 SOPS systems."

Saltzman stressed that the new naming scheme will help the newest branch of the U.S. military establish its own identity. "These symbols conjure the character of the systems, the importance of their mission, and the identity of the Guardians who employ them," Saltzman said. The new names will serve as "a way to own the identity of our space systems as they enter the joint fight," he added.

Unlike the U.S. Air Force's iconic aircraft or the U.S. Army's ground vehicles such as tanks, the public rarely gets a glimpse at Space Force's assets in orbit. This is partly by design; many of Space Force's spacecraft are highly classified, which can make it difficult for the service to communicate its missions and capabilities both to the public and throughout the U.S. armed services.


Tuesday, December 16, 2025

OpenAI Is Adding Layered Safeguards To Protect Itself

Layered Safeguards
Tech giant OpenAI says its cybersecurity-focused models are rapidly advancing, with capture-the-flag (CTF) performance jumping from 27 percent on GPT-5 in August 2025 to 76 percent on GPT-5.1-Codex-Max in November 2025.

According to a report by Neetika Walter at Interesting Engineering, the spike shows how quickly AI systems are acquiring technical proficiency in security tasks.

The report added that the company expects future models could reach "High" capability levels under its Preparedness Framework.

That means models powerful enough to develop working zero-day exploits or assist with sophisticated enterprise intrusions.

In anticipation, OpenAI says it is preparing safeguards as if every new model could reach that threshold, ensuring progress is paired with strong risk controls.

OpenAI is expanding investments in models designed to support defensive workflows, from auditing code to patching vulnerabilities at scale.

The company says its aim is to give defenders an edge in a landscape where they are often "outnumbered and under-resourced."

Because offensive and defensive cyber tasks rely on the same knowledge, OpenAI says it is adopting a defense-in-depth approach rather than depending on any single safeguard.

The company emphasizes shaping "how capabilities are accessed, guided, and applied" to ensure AI strengthens cybersecurity rather than lowering barriers to misuse.

OpenAI notes that this work is a long-term commitment, not a one-off safety effort. Its goal is to continually reinforce defensive capacity as models become more capable.

At the foundation, OpenAI uses access controls, hardened infrastructure, egress restrictions, and comprehensive monitoring. These systems are supported by detection and response layers, plus internal threat intelligence programs.

Training also plays a critical role. OpenAI says it is teaching its frontier models "to refuse or safely respond to requests that would enable clear cyber abuse," while staying helpful for legitimate defensive and educational needs.

Company-wide detection systems monitor for potential misuse. When activity appears unsafe, OpenAI may block outputs, redirect prompts to safer models, or escalate to enforcement teams.

Both automated tools and human reviewers contribute to these decisions, factoring in severity, legal requirements, and repeat behavior.

The company is also relying on end-to-end red teaming. External experts attempt to break every layer of defense, "just like a determined and well-resourced adversary," helping identify weaknesses early.


Thursday, December 11, 2025

Instagram Employees Will Now Have To Report Five Days A Week F2F

Instagram Mosseri
Instagram’s CEO has expressed his desire to put employees back in the office five days a week, but the last thing he wants is a return to business as usual.

Chief Adam Mosseri said that, starting 2 February 2026, U.S. employees will need to return to the office full-time, according to a companywide memo first reported by Business Insider, whose authenticity was confirmed by a spokesperson. However, Instagram’s New York City employees won’t be forced back five days a week until the company has "alleviated the space constraints," and remote employees are exempt from the change.

In justifying the new policy, Mosseri cited the usual corporate talking point of increasing collaboration, and said creativity will be better in-person, too.

Yet he also noted that to create a "winning culture," the Meta subsidiary needed a shake-up of routine. Unnecessary meetings and endless PowerPoints need to be replaced with clear objectives and more prototypes, he wrote. One-on-one meetings should also be biweekly by default, he added, and employees should feel free to decline meetings that fall within their "focus blocks."

"Every six months, we’ll cancel all recurring meetings and only re-add the ones that are absolutely necessary," he wrote.

Instead of slide decks, Mosseri also said employees should be presenting more prototypes—especially when it comes to product overviews. Prototypes, he said, help the company better establish a proof of concept and get a sense of "social dynamics."

"If a deck is necessary, it should be as tight as possible," he wrote.

Product review meetings should also come with clear objectives. "I want most of your time focused on building great products, not preparing for meetings," he wrote.

Big tech companies have been slowly doing away with the flexible work-from-home policies that defined the pandemic and the years that followed, yet few have called for five-day returns thus far. Instagram parent Meta has required employees in the office three days a week since 2023.


Saturday, December 6, 2025

IBM CEO Expresses Doubt On Profit Targets Of Amazon And Google

IBM Krishna
While giant tech companies like Google and Amazon tout the billions they’re pouring into AI infrastructure, IBM’s CEO believes their bets may not pay off like they think.

Arvind Krishna, who has been at the helm of the legacy tech company since 2020, said even a simple calculation reveals there is "no way" tech companies' massive data center investments make sense. This is in part because data centers require huge amounts of energy and investment, Krishna said on the Decoder podcast.

Goldman Sachs estimated earlier this year that the total power usage by the global data center market stood at around 55 gigawatts, of which only a fraction (14 percent) is dedicated to AI. As demand for AI grows, the power required by the data center market could jump to 84 gigawatts by 2027, according to Goldman Sachs.

Yet building out a data center that uses merely one gigawatt costs a fortune: an estimated US$ 80 billion in today’s dollars, according to Krishna. If a single company commits to building out 20 to 30 gigawatts, that would amount to US$ 1.5 trillion in capital expenditures, Krishna said. That’s an investment roughly equal to Tesla’s current market cap.

All the hyperscalers together could potentially add about 100 gigawatts, he estimated, but that still requires US$ 8 trillion in investment—and the profit needed to balance out that investment is immense.

"It’s my view that there’s no way you’re going to get a return on that, because US$ 8 trillion of capex [capital expenditure] means you need roughly US$ 800 billion of profit just to pay for the interest," he said.
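Krishna's back-of-envelope math is easy to reproduce. The sketch below uses only the figures he cites; note that the roughly 10 percent interest rate is our inference from his "US$ 8 trillion of capex means US$ 800 billion of profit" claim, not something he stated directly.

```python
# Reconstructing Krishna's data-center arithmetic, in US$ billions.
# Assumption: the ~10 percent rate is inferred from his figures.

CAPEX_PER_GW_B = 80    # ~US$ 80 billion per gigawatt, per Krishna
HYPERSCALER_GW = 100   # his estimate for all hyperscalers combined

total_capex_b = CAPEX_PER_GW_B * HYPERSCALER_GW   # 8,000 = US$ 8 trillion
annual_interest_b = total_capex_b // 10           # 800 = US$ 800 billion/yr

print(f"Total capex: US$ {total_capex_b / 1000:.0f} trillion")
print(f"Profit needed for interest at ~10%: US$ {annual_interest_b} billion per year")
```

For scale, US$ 800 billion a year is more than the combined 2024 net income of Alphabet, Amazon, Microsoft, and Meta, which is the core of Krishna's skepticism.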

Moreover, thanks to technology’s rapid advance, the chips powering your data center could quickly become obsolete.

"You’ve got to use it all in five years, because at that point, you’ve got to throw it away and refill it," he said.

Krishna added that part of the motivation behind this flurry of investment is large tech companies’ race to be the first to crack AGI, or an AI that can match or surpass human intelligence.

Yet Krishna says there’s at most a 1 percent chance this feat can be accomplished with our current technology, despite the steady improvement of large language models.

"I think it’s incredibly useful for enterprise. I think it’s going to unlock trillions of dollars of productivity in the enterprise, just to be absolutely clear," he said. "That said, I think AGI will require more technologies than the current LLM path."
