Thursday, April 2, 2026

Humans Are Not The Only Users Of The Internet

Internet Users
It was recently reported that automated traffic on the internet grew nearly eight times faster than human traffic in 2025. The more important shift isn’t the volume — it’s what that automation is actually doing now.

For years, the bot problem was mostly a nuisance. Scrapers grabbed pricing data. Crawlers hoovered up content. Credential stuffers hammered login pages. Those are still real problems. But the nature of automated traffic has changed, and most organizations’ security thinking hasn’t caught up.

AI agents aren’t just reading the web anymore. They’re transacting on it.

A new benchmark report from Human Security, which analyzed more than one quadrillion interactions across its customer base in 2025, puts numbers to the shift. Monthly AI-driven traffic volumes grew 187 percent from January to December. Agentic AI traffic—systems that browse, fill forms, manage accounts and complete purchases on behalf of users—grew 7,851 percent year over year.

An AI agent completing a checkout isn’t just browsing. It’s making a financial decision on behalf of a human user, interacting with payment systems and account infrastructure. The security implications are fundamentally different from a scraper reading your product pages.

Tony Bradley of Forbes had an opportunity to chat with Todd Thiemann, cybersecurity industry analyst with Omdia, about what that shift means for security teams. Thiemann's framing was direct: "AI agents hold the promise of improving efficiency and productivity, but those new identities need to be managed and secured for compliance reasons, for cybersecurity reasons and to facilitate growth of the business."

AI agents aren’t just another traffic type to classify. They’re a new category of entity that can act, decide and commit—and most enterprise identity frameworks weren’t built with them in mind.

Security teams have spent years asking one question: is this traffic from a bot or a human? That framing made sense when bots were mostly adversarial and humans were mostly legitimate. It doesn’t hold anymore.

An AI agent browsing product pages, logging into an account and completing a purchase is doing exactly what a sophisticated bot attack looks like. The behavior is functionally identical. The difference is intent—and intent doesn’t show up in a user-agent string.

Across all the interactions analyzed, only half of one percent separates benign automation from malicious automation. Organizations that block all automation will turn away legitimate agentic commerce. Those that allow it unchecked absorb fraud. The real question isn’t whether traffic is automated—it’s whether a given interaction is trustworthy.

Threat actors are targeting the same surfaces where agentic AI operates: product pages, account management flows and checkout. That overlap isn’t coincidental.

Post-login account compromise attempts more than quadrupled in 2025, averaging 402,000 per organization. Login-point defenses have improved enough that attackers now wait until after authentication, abusing session tokens and exploiting weak step-up controls rather than forcing their way through the front door.
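The usual defenses against this pattern are to bind session tokens to client context and to require step-up verification before sensitive post-login actions. A minimal sketch of both ideas, assuming hypothetical action names and a generic "client fingerprint" string standing in for real device or TLS signals:

```python
import hashlib
import hmac
import secrets
import time

SESSION_TTL = 15 * 60  # absolute session lifetime in seconds (illustrative)
SENSITIVE = {"change_email", "add_payment_method", "checkout"}

class Session:
    def __init__(self, user_id: str, client_fingerprint: str):
        self.user_id = user_id
        self.token = secrets.token_urlsafe(32)
        self.issued = time.time()
        # Bind the token to coarse client attributes so a stolen token
        # replayed from a different client fails validation.
        self.binding = hashlib.sha256(client_fingerprint.encode()).hexdigest()
        self.step_up_done = False  # set True after e.g. WebAuthn or OTP

def authorize(session: Session, action: str, client_fingerprint: str) -> str:
    """Decide whether a post-login request may proceed."""
    if time.time() - session.issued > SESSION_TTL:
        return "reauthenticate"  # expired token: force a full login
    binding = hashlib.sha256(client_fingerprint.encode()).hexdigest()
    if not hmac.compare_digest(binding, session.binding):
        return "reauthenticate"  # token presented from a new client
    if action in SENSITIVE and not session.step_up_done:
        return "step_up_challenge"  # challenge before the action commits
    return "allow"
```

The point of the sketch is that authentication is not a one-time gate: the session itself carries state that every sensitive request re-checks, which is exactly the control that post-login attacks exploit when it is weak or absent.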

Scraping attacks now account for nearly 20 percent of global web traffic at the median, nearly double the 2022 rate. For heavily targeted organizations, the share exceeds 60 percent. Carding volume is up 250 percent over the same period.

Researchers have already documented AI agents executing carding attacks—cycling through card additions and payment attempts via agentic browsers, mirroring established fraud workflows without manual effort. The same tools built to help consumers shop are proving equally useful for fraud.

The spoofing problem compounds this. Attackers masquerade as recognized AI crawlers — claiming to be ChatGPT, Mistral, or Perplexity bots — to exploit the trust organizations extend to those names. Whitelisting based solely on user-agent strings grants access to actors who aren’t who they claim to be. And the same company can operate crawlers, scrapers and agentic systems simultaneously, so operator-level access decisions don’t map cleanly to behavior. Declared identity is the starting point, not the answer.
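The standard countermeasure to this kind of spoofing is a forward-confirmed reverse DNS check, the same technique long used to verify search engine crawlers: trust the declared identity only if the connecting IP's PTR record lands in the operator's domain and that hostname resolves back to the same IP. A minimal sketch in Python; the bot names and domain suffixes in the table are illustrative assumptions, not the operators' published verification domains:

```python
import socket

# Illustrative mapping only -- real operators publish their own
# verification domains and/or signed IP range lists.
TRUSTED_SUFFIXES = {
    "gptbot": (".openai.com",),
    "perplexitybot": (".perplexity.ai",),
}

def verify_claimed_crawler(client_ip: str, claimed_bot: str) -> bool:
    """Forward-confirmed reverse DNS: accept the claimed crawler identity
    only if the IP reverse-resolves into the operator's domain AND that
    hostname forward-resolves back to the same IP."""
    suffixes = TRUSTED_SUFFIXES.get(claimed_bot.lower())
    if not suffixes:
        return False  # unknown bot name: treat the declaration as unverified
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)  # reverse lookup
    except OSError:
        return False  # no PTR record: cannot confirm the claim
    if not hostname.endswith(suffixes):
        return False  # PTR points outside the operator's domain
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward lookup
    except OSError:
        return False
    return client_ip in forward_ips  # must round-trip to the same address
```

Even a passing check only establishes who operates the client, not what it is doing, which is why operator-level allowlists still need to be paired with behavioral analysis.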

Read More

Wednesday, April 1, 2026

Google Focuses On Neutral Atom Quantum Computing

Google
Google Quantum AI has reportedly broadened its quantum computing roadmap by introducing a neutral atom quantum computing programme alongside its established superconducting qubit research.

In a company blog, Hartmut Neven, founder and lead of Google Quantum AI, wrote that this strategic expansion aims to leverage the respective strengths of both modalities, namely superconducting circuits and neutral atom arrays. He stated that the intention is to advance the timeline for achieving large-scale, fault-tolerant quantum computation.

The move comes as Google expresses increasing confidence that quantum computers with commercial relevance, built on superconducting technology, will become available by the end of the decade. The addition of neutral atom quantum computing is described as complementary, with the potential to accelerate progress on major technical milestones by enabling research on platforms with inherently different scaling and connectivity properties.

In its superconducting systems, Google has developed processors capable of performing millions of gate and measurement cycles, with each cycle lasting approximately one microsecond. These processors have already demonstrated benchmark results, including experiments indicating beyond-classical computational capabilities and progress on quantum error correction.

However, scaling superconducting qubits to the tens of thousands required for practical error-corrected computing remains a significant challenge.

Neutral atom quantum computers, by contrast, use individual atoms held in place by optical traps as qubits. These systems have reached arrays of up to ten thousand qubits, surpassing most current superconducting implementations in scale.

Clock cycles for neutral atom operations are slower, measured in milliseconds, but the architecture allows for all-to-all qubit connectivity. This connectivity offers potential advantages in algorithm design and error correction, with the flexibility to implement efficient codes that may reduce overheads required for fault-tolerant operation.

According to Google, the primary challenge for neutral atom systems is to demonstrate deep circuits with many coherent operation cycles, while the immediate focus for superconducting platforms is scaling up the number of physical qubits. By investing in both, Google intends to cross-apply research and engineering developments and to provide versatile platforms suitable for a broader class of quantum algorithms.

As part of the new neutral atom programme, Google has appointed Dr Adam Kaufman to lead experimental efforts. Dr Kaufman, who will continue his roles as JILA Fellow and faculty member at the University of Colorado Boulder, is recognised for his work on atomic, molecular and optical (AMO) physics.

Kaufman said: "I am thrilled to join Google's world-leading programme in quantum computing, and to expand that leadership to a new and highly promising platform of neutral atoms."

The experimental hardware team will be based in Boulder, Colorado, a region that hosts a significant concentration of AMO research at institutions including CU Boulder, JILA and NIST Boulder.

Google’s neutral atom research rests on three pillars: adapting quantum error correction protocols to the physical connectivity of neutral atom arrays; using high-performance computational modelling and simulation to optimise hardware architecture and error budgets; and experimentally developing atomic qubit systems at application-relevant scales.

Together, these pillars form the basis of the company’s approach to advancing neutral atom quantum computing, with the objective of reaching fault-tolerant performance and building hardware capable of supporting practical quantum algorithms.

Google Quantum AI also confirmed continued collaboration with QuEra, a portfolio company engaged in neutral atom quantum computing research.

Furthermore, it states that by tapping into the regional expertise and infrastructure of Boulder’s quantum research community, it aims to drive further developments in both theory and hardware.

Google Quantum AI aims to address ongoing challenges in physics and engineering while pursuing scalable quantum computing architectures using both superconducting and neutral atom technologies.

"Google expands quantum computing with neutral atom programme" was originally created and published by Verdict, a GlobalData owned brand.

Read More

Tuesday, March 31, 2026

Judge Sided With Advertisers In Twitter Lawsuit

Twitter Lawsuit
A United States federal judge has dismissed Elon Musk’s antitrust lawsuit against major advertisers, dealing a blow to X’s legal campaign over an alleged ad boycott following his takeover of Twitter.

US District Judge Jane Boyle ruled that Musk failed to establish a valid antitrust claim.

The court found no evidence that advertisers harmed consumers by pulling back spending on the platform, now known as X. Without consumer harm, Boyle said, antitrust law does not apply.

"The very nature of the alleged conspiracy does not state an antitrust claim, and the Court therefore has no qualm dismissing with prejudice," Boyle said. She further stressed, "the question underlying antitrust injury is whether consumers—not competitors—have been harmed."

Musk’s lawsuit targeted the World Federation of Advertisers and several multinational brands, including Shell, Nestle, Colgate, and Mars. X alleged they coordinated an "illegal boycott" under the umbrella of brand safety initiatives.

The court, however, sided with advertisers.

It found that companies acted within their rights to control where ads appear. Many brands had reduced spending after concerns about rising harmful content on the platform.

Advertisers had formed the Global Alliance for Responsible Media (GARM) to enforce shared standards. This initiative allowed brands to push platforms to meet content guidelines.

Platforms could join but did not control the group.

Judge Boyle noted that Musk appeared to underestimate this collective influence before acquiring Twitter.

Advertisers used coordinated pressure to demand compliance with brand safety norms.

That pressure included warnings of collective action if standards slipped.

The advertiser pullback hit X’s finances hard. At one point in 2023, revenue dropped by as much as 59 percent over a five-week period. The decline followed Musk’s overhaul of moderation systems and the disbanding of internal safety groups.

Read More

Monday, March 30, 2026

Amazon Seeks To Stop SpaceX's Orbital Data Center

SpaceX
In a surprising move, Amazon has thrown itself into the fight over SpaceX's controversial plan to launch a constellation of one million data center satellites into Earth orbit.

In early March 2025, Amazon lodged a complaint against the plan, urging the Federal Communications Commission (FCC) to deny SpaceX's application because SpaceX had not presented any solid details about how it would achieve its plan. Amazon argues that SpaceX's plan would take centuries and force every agency that uses low Earth orbit to schedule around a plan that may never even come to fruition.

Moreover, Amazon cites the FCC's own rules (Section 25.112), which make clear (via Scribd) that the Commission must dismiss all "applicants that, among other things, fail to provide complete information or full answers to the questions asked by the Commission's Part 25 licensing rules." The complaint drew a harsh response from FCC Chairman Brendan Carr, an ally of SpaceX owner Elon Musk, who publicly scolded Jeff Bezos' company in an X post.

Amazon Leo's complaint goes on to point out that others are opposed to the development, noting that the pollution from such systems could nullify the environmental gains of moving data center infrastructure off the planet.

Furthermore, some worry that such constellations could devastate ground-based astronomy. Whether SpaceX gets its orbital data center business off the ground may have a major impact on its status as the world's most valuable private company.

SpaceX has pitched the data center plan as "the first step towards becoming a Kardashev II-level civilization," fully harnessing the Sun to propel "humanity's multi-planetary future."

According to the company's FCC proposal (via Scribd), the system's one million satellites will operate in "narrow orbital shells" spanning up to 50 kilometers wide. To fully harness the Sun's rays, the satellites must operate outside the Earth's shadow, necessitating a much higher altitude than current LEO constellations, estimated to be between 500 and 2,000 km. Data transfers will rely "nearly exclusively on high-bandwidth optical links," to transfer data to both Starlink satellites and earthbound customers.

Amazon's criticism isn't without merit. SpaceX's ambitions would require it to launch roughly 63 times more satellites than are currently in low Earth orbit. Google executive Travis Beals estimated that roughly 10,000 satellites would be needed to match the capacity of a one-gigawatt data center campus (via WSJ).

Many were already skeptical of Starlink's 40,000-satellite initiative. SpaceX's FCC proposal does little to explain how it will achieve the launch capacity required, which would mean dramatically escalating its launch cadence from once every few days to once every few hours.

Further complicating the issue is the likely size of SpaceX's LEO data centers. Experts believe that low-Earth orbit data centers' added hardware will make them much larger than any previous SpaceX satellite (via Sky & Telescope). This will require SpaceX to rely on its troubled Starship rocket, whose viral explosions have delayed NASA's Artemis lunar mission by two years and counting.

To date, SpaceX is aiming for a 2028 launch that many experts are treating with heavy skepticism.

Read More

Friday, March 27, 2026

Jury Rules Against Meta In Child Exploitation Case

Meta
A jury has found Meta Platforms liable in a major child safety case, ruling that the company failed to adequately protect young users on its platforms and misled the public about associated risks.

The verdict, delivered in New Mexico, orders Meta to pay US$375 million in damages.

The case stems from a lawsuit filed by the New Mexico Attorney General’s Office, which accused Meta of creating conditions that allowed child predators to operate on platforms like Facebook and Instagram.

Jurors concluded that the company engaged in unfair and deceptive trade practices and acted in ways deemed unconscionable under state law.

The trial included evidence from an undercover investigation in which officials created fake accounts posing as minors.

These accounts reportedly received sexually explicit content and solicitations, leading to arrests in some cases. Prosecutors argued that Meta failed to implement adequate safeguards to prevent such interactions.

Meta denied the allegations and said it plans to appeal the ruling.

"We respectfully disagree with the verdict and will appeal," CNN quoted a company spokesperson as saying.

The decision adds to growing legal pressure on social media companies over how their platforms are designed and how effectively they protect vulnerable users.

During the trial, prosecutors argued that Meta’s systems, particularly its recommendation algorithms, could inadvertently connect predators with minors.

Meta, however, maintained that it has invested heavily in safety measures. A spokesperson told CNN, "We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content."

The case also raised concerns about encrypted messaging features, which prosecutors argued could make it harder for law enforcement to detect harmful activity. Meta has indicated it may roll back certain encryption features on Instagram.

Testimony from former employees suggested that internal concerns about safety were not fully addressed.

New Mexico Attorney General Raúl Torrez said in a statement to CNBC, "The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety."

The New Mexico case is part of a broader wave of litigation targeting major tech platforms.

Similar lawsuits across the United States are examining whether social media companies have knowingly designed addictive or harmful features, particularly for younger users.

Legal experts say these cases could reshape how platforms are regulated, especially if courts begin holding companies accountable not just for content, but for design decisions that influence user behavior.

Read More