Saturday, April 4, 2026

Austria Plans To Ban Social Media For Children

Social Media Ban
Austria is the latest country to announce that it plans to ban social media for children aged under 14.

It follows lengthy negotiations within the conservative-led three-party coalition government, but it is not yet clear how or when the ban will be implemented.

Announcing the plans, Vice-Chancellor Andreas Babler of the Social Democrats said the government could not stand by and watch as social media made children "addicted and also often ill".

He said it was the responsibility of politicians to protect children and argued that the issue should be treated no differently from alcohol or tobacco: "There must be clear rules in the digital world too."

In future, said Babler, children under 14 would be protected from addictive algorithms.

"Other information providers have clear rules to protect young people from harmful content." These, he said, should now be implemented in the digital space.

Austria is the latest among a growing number of countries to consider restricting social media access for children, citing concerns about potentially harmful content made available to them on the platforms.

In a landmark case in the US on Wednesday, a jury found two social media giants had intentionally built addictive algorithms that harmed young people's mental health.

When challenged on questions of harm, social media companies point to rules barring under-13s from joining their platforms - though questions remain about how strictly these are enforced - and to versions of their sites with parental controls.

Australia introduced a ban for under-16s in December, becoming the first nation to do so.

France's lower house approved a ban for under-15s in January. In a post on X, French President Emmanuel Macron thanked Austria for "joining the movement".

The UK government has launched a consultation on banning social media for under-16s, while Denmark, Greece, Spain and Ireland are also considering similar moves: Spain and Ireland for under-16s, and Denmark and Greece for under-15s.

Austrian Education Minister Christoph Wiederkehr, from the liberal Neos party, stressed the "harmful" nature of social media, adding: "People need to learn how to use it responsibly."

The state secretary for digitalisation, Alexander Pröll, from the conservative ÖVP, said that a draft bill codifying the ban would be presented by the end of June.

The bill is expected to contain technical details of an agreed mechanism to verify people's ages when accessing social media platforms. Babler said Austria could use an EU system if it was ready, but that it would pursue a national plan if not.

The general secretary of the far-right opposition Freedom Party, Christian Hafenecker, condemned the plans as "a direct attack on young people's freedom of expression and freedom of information".

Read More

Friday, April 3, 2026

AI Videos Sexualizing Black Women Removed By TikTok

Banned By TikTok
TikTok has banned 20 accounts after the BBC reported on the use of AI-generated black female influencers to drive users to sites promoting sexually explicit content.

They are part of a growing trend of accounts on Instagram and TikTok that has been criticized as racist, exploitative and misleading because of the racial tropes and language used.

The BBC and researchers from the independent AI publication Riddance found dozens of accounts on the two platforms featuring highly sexualized black female digital characters or avatars.

The images and videos were generated by AI but not labelled as such, in apparent breach of the platforms' guidelines.

Nearly all the accounts were on Instagram and about a third also had versions on TikTok. Instagram's parent company Meta told the BBC it was investigating, but did not say it had taken any action.

The avatars are often shown dressed in skimpy swimwear or other revealing clothing and portrayed with exaggerated body shapes.

Some have exceptionally dark skin tones that have been digitally manipulated, giving them an artificial appearance.

Account names include terms such as "black", "noir", "dark" and "ebony". Several include comments about white males in their posts, such as "loves white men" and "why I need a white guy in my life". Many of the accounts follow or like each other.

The BBC, working in collaboration with analysts Jeremy Carrasco and Angel Nulani from Riddance, has identified 60 such accounts, mainly on Instagram, that have carried links, or chains of links, to paid-for sexually explicit content on third-party sites. The sites labelled the imagery as AI-generated, but the Instagram accounts did not.

The research also identified many more accounts on both Instagram and TikTok with similar AI-generated avatars that did not link to paid content.

Read More

Thursday, April 2, 2026

Humans Are Not The Only Users Of The Internet

Internet Users
It was recently reported that automated traffic on the internet grew nearly eight times faster than human traffic in 2025. The more important shift isn’t the volume — it’s what that automation is actually doing now.

For years, the bot problem was mostly a nuisance. Scrapers grabbed pricing data. Crawlers hoovered up content. Credential stuffers hammered login pages. Those are still real problems. But the nature of automated traffic has changed, and most organizations’ security thinking hasn’t caught up.

AI agents aren’t just reading the web anymore. They’re transacting on it.

A new benchmark report from Human Security, which analyzed more than one quadrillion interactions across its customer base in 2025, puts numbers to the shift. Monthly AI-driven traffic volumes grew 187 percent from January to December. Agentic AI traffic—systems that browse, fill forms, manage accounts and complete purchases on behalf of users—grew 7,851 percent year over year.

An AI agent completing a checkout isn’t just browsing. It’s making a financial decision on behalf of a human user, interacting with payment systems and account infrastructure. The security implications are fundamentally different from a scraper reading your product pages.

Tony Bradkey of Forbes spoke with Todd Thiemann, a cybersecurity industry analyst at Omdia, about what that shift means for security teams. His framing was direct: "AI agents hold the promise of improving efficiency and productivity, but those new identities need to be managed and secured for compliance reasons, for cybersecurity reasons and to facilitate growth of the business."

AI agents aren’t just another traffic type to classify. They’re a new category of entity that can act, decide and commit—and most enterprise identity frameworks weren’t built with them in mind.

Security teams have spent years asking one question: is this traffic from a bot or a human? That framing made sense when bots were mostly adversarial and humans were mostly legitimate. It doesn’t hold anymore.

An AI agent browsing product pages, logging into an account and completing a purchase is doing exactly what a sophisticated bot attack looks like. The behavior is functionally identical. The difference is intent—and intent doesn’t show up in a user-agent string.
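The point about user-agent strings can be seen directly: the header is simply whatever identity the client chooses to declare. A minimal Python sketch (the crawler name here is invented for illustration):

```python
import urllib.request

# Any client can claim any identity in the User-Agent header.
# urllib stores header names with only the first letter capitalized.
req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "FriendlyCrawler/1.0 (claimed identity)"},
)

# The request carries whatever identity the sender chose to send;
# nothing about the string itself proves who is actually connecting.
print(req.get_header("User-agent"))
```

Because the field is self-declared, it reveals what the traffic claims to be, not what it intends to do.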

Across all the interactions analyzed, only half of one percent separates benign automation from malicious automation. Organizations that block all automation will turn away legitimate agentic commerce. Those that allow it unchecked absorb fraud. The real question isn’t whether traffic is automated—it’s whether a given interaction is trustworthy.

Threat actors are targeting the same surfaces where agentic AI operates: product pages, account management flows and checkout. That overlap isn’t coincidental.

Post-login account compromise attempts more than quadrupled in 2025, averaging 402,000 per organization. Login-point defenses have improved enough that attackers now wait until after authentication, abusing session tokens and exploiting weak step-up controls rather than forcing their way through the front door.

Scraping attacks now account for nearly 20 percent of global web traffic at the median — nearly double the rate in 2022. For heavily targeted organizations, it exceeds 60 percent. Carding volume is up 250 percent over the same period.

Researchers have already documented AI agents executing carding attacks—cycling through card additions and payment attempts via agentic browsers, mirroring established fraud workflows without manual effort. The same tools built to help consumers shop are proving equally useful for fraud.

The spoofing problem compounds this. Attackers masquerade as recognized AI crawlers — claiming to be ChatGPT, Mistral, or Perplexity bots — to exploit the trust organizations extend to those names. Whitelisting based solely on user-agent strings grants access to actors who aren’t who they claim to be. And the same company can operate crawlers, scrapers and agentic systems simultaneously, so operator-level access decisions don’t map cleanly to behavior. Declared identity is the starting point, not the answer.
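One standard mitigation, which major crawler operators document for their own bots, is forward-confirmed reverse DNS: resolve the connecting IP to a hostname, check that the hostname sits in the claimed operator's domain, then resolve the hostname forward and confirm it maps back to the same IP. A sketch in Python; the domain suffixes are illustrative, not an exhaustive list:

```python
import socket

def verify_claimed_crawler(ip: str, expected_suffixes: tuple) -> bool:
    """Forward-confirmed reverse DNS: does this IP actually belong to
    the crawler operator it claims to be? Returns False on any failure."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # IP -> PTR hostname
    except OSError:
        return False  # no reverse record, so the claim can't be confirmed
    if not hostname.endswith(expected_suffixes):
        return False  # hostname is not in the claimed operator's domain
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # hostname -> IPs
    except OSError:
        return False
    return ip in forward_ips  # forward lookup must confirm the same IP

# A request claiming to be Googlebot should pass against Google's
# documented domains; a spoofer's IP resolves elsewhere and fails.
verify_claimed_crawler("203.0.113.10", (".googlebot.com", ".google.com"))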

Read More

Wednesday, April 1, 2026

Google Focuses On Neutral Atom Quantum Computing

Google
Google Quantum AI was reported to have broadened its quantum computing roadmap by introducing a neutral atom quantum computing programme alongside its established superconducting qubit research.

In a company blog, Hartmut Neven, founder and lead of Google Quantum AI, wrote that this strategic expansion aims to leverage the respective strengths of both modalities, namely superconducting circuits and neutral atom arrays. He stated that the intention is to advance the timeline for achieving large-scale, fault-tolerant quantum computation.

The move comes as Google expresses increasing confidence that quantum computers with commercial relevance, built on superconducting technology, will become available by the end of the decade. The addition of neutral atom quantum computing is described as complementary, with the potential to accelerate progress on major technical milestones by enabling research on platforms with inherently different scaling and connectivity properties.

In its superconducting systems, Google has developed processors capable of performing millions of gate and measurement cycles, with each cycle lasting approximately one microsecond. These processors have already demonstrated benchmark results, including experiments indicating beyond-classical computational capabilities and progress on quantum error correction.

However, scaling superconducting qubits to the tens of thousands required for practical error-corrected computing remains a significant challenge.

Neutral atom quantum computers, by contrast, exploit individual atoms held in place by optical traps as qubits. These systems have reached arrays of up to ten thousand qubits, surpassing most current superconducting implementations in terms of spatial scale.

Clock cycles for neutral atom operations are slower, measured in milliseconds, but the architecture allows for all-to-all qubit connectivity. This connectivity offers potential advantages in algorithm design and error correction, with the flexibility to implement efficient codes that may reduce overheads required for fault-tolerant operation.

According to Google, the primary challenge for neutral atom systems is to demonstrate deep circuits with many coherent operation cycles, while the immediate focus for superconducting platforms is scaling up the number of physical qubits. By investing in both, Google intends to cross-apply research and engineering developments and to provide versatile platforms suitable for a broader class of quantum algorithms.

As part of the new neutral atom programme, Google has appointed Dr Adam Kaufman to lead experimental efforts. Dr Kaufman, who will continue his roles as JILA Fellow and faculty member at the University of Colorado Boulder, is recognised for his work on atomic, molecular and optical (AMO) physics.

Kaufman said: "I am thrilled to join Google's world-leading programme in quantum computing, and to expand that leadership to a new and highly promising platform of neutral atoms."

The experimental hardware team will be based in Boulder, Colorado, a region that hosts a significant concentration of AMO research at institutions including CU Boulder, JILA and NIST Boulder.

Google’s neutral atom research rests on three pillars: adaptation of quantum error correction protocols to the physical connectivity of neutral atom arrays; use of high-performance computational modelling and simulation to optimise hardware architecture and error budgets; and experimental development of atomic qubit systems at application-relevant scales.

Together, these pillars form the basis of the company’s approach to advancing neutral atom quantum computing, with the objective of reaching fault-tolerant performance and advancing hardware capable of supporting practical quantum algorithms.

Google Quantum AI also confirmed continued collaboration with QuEra, a portfolio company engaged in neutral atom quantum computing research.

Furthermore, it states that by tapping into the regional expertise and infrastructure of Boulder’s quantum research community, it aims to drive further developments in both theory and hardware.

Google Quantum AI aims to address ongoing challenges in physics and engineering while pursuing scalable quantum computing architectures using both superconducting and neutral atom technologies.

"Google expands quantum computing with neutral atom programme" was originally created and published by Verdict, a GlobalData owned brand.

Read More

Tuesday, March 31, 2026

Judge Sided With Advertisers In Twitter Lawsuit

Twitter Lawsuit
A United States federal judge has dismissed Elon Musk’s antitrust lawsuit against major advertisers, dealing a blow to X’s legal campaign over an alleged ad boycott following his takeover of Twitter.

US District Judge Jane Boyle ruled that Musk failed to establish a valid antitrust claim.

The court found no evidence that advertisers harmed consumers by pulling back spending on the platform, now known as X. Without consumer harm, Boyle said, antitrust law does not apply.

"The very nature of the alleged conspiracy does not state an antitrust claim, and the Court therefore has no qualm dismissing with prejudice," Boyle said. She further stressed, "the question underlying antitrust injury is whether consumers—not competitors—have been harmed."

Musk’s lawsuit targeted the World Federation of Advertisers and several multinational brands, including Shell, Nestle, Colgate, and Mars. X alleged they coordinated an "illegal boycott" under the umbrella of brand safety initiatives.

The court, however, sided with advertisers.

It found that companies acted within their rights to control where ads appear. Many brands had reduced spending after concerns about rising harmful content on the platform.

Advertisers had formed the Global Alliance for Responsible Media (GARM) to enforce shared standards. This initiative allowed brands to push platforms to meet content guidelines.

Platforms could join but did not control the group.

Judge Boyle noted that Musk appeared to underestimate this collective influence before acquiring Twitter.

Advertisers used coordinated pressure to demand compliance with brand safety norms.

That pressure included warnings of collective action if standards slipped.

The advertiser pullback hit X’s finances hard. At one point in 2023, revenue dropped by as much as 59 percent over a five-week period. The decline followed Musk’s overhaul of moderation systems and the disbanding of internal safety groups.

Read More