The Surveillance State: Metaverse Is the Ultimate Surveillance Tool

The Metaverse Is the Ultimate Surveillance Tool

 

Really well done podcast about the Metaverse and its application for global surveillance, the likes of which you have never imagined!
The Surveillance State: Twitter Lawsuit and FISC Court Updates


A federal judge delivered a blow Monday to Twitter’s drive to release more details on surveillance orders it receives, but the tech firm won a chance to try to reformulate its case.

U.S. District Court Judge Yvonne Gonzalez Rogers ruled that the government has the power to prohibit the release of classified information, rejecting claims Twitter made in a lawsuit filed two years ago that challenged as unconstitutional the limits federal officials have placed on the publication of some statistics about surveillance demands.

“The First Amendment does not permit a person subject to secrecy obligations to disclose classified national security information,” Rogers wrote, citing a 1980 Supreme Court case about a former CIA analyst publishing the names of CIA personnel overseas. “Twitter has conceded that the aggregate data is classified. In the absence of a challenge to the decisions classifying that information, Twitter’s Constitutional challenges simply do not allege viable claims.”

However, the Oakland, California-based judge’s order went on to essentially invite Twitter to re-file its case, incorporating a claim that the government has not “properly classified” the statistics at issue.

Read the Remainder at Politico

FISC

U.S. spy court rejected zero surveillance orders in 2015

The court received 1,457 requests last year on behalf of the National Security Agency and the Federal Bureau of Investigation for authority to intercept communications, including email and phone calls, according to a Justice Department memo sent to leaders of relevant congressional committees on Friday and seen by Reuters. The court did not reject any of the applications in whole or in part, the memo showed.

The total represented a slight uptick from 2014, when the court received 1,379 applications and rejected none.

The court, which acts behind closed doors, was established in 1978 to handle applications for surveillance warrants against foreign suspects by U.S. law enforcement and intelligence agencies and grew more controversial after 2013 leaks by former NSA contractor Edward Snowden.

Read the Remainder at Reuters

“Predictive Policing”: The Cyber Version of “Stop and Frisk”


Thanks, America! How China’s Newest Software Could Track, Predict, and Crush Dissent

Armed with data from spying on its citizens, Beijing could turn ‘predictive policing’ into an AI tool of repression.

What if the Communist Party could have predicted Tiananmen Square? The Chinese government is deploying a new tool to keep the population from rising up. Beijing is building software to predict instability before it arises, based on volumes of data mined from Chinese citizens about their jobs, pastimes, and habits. It’s the latest advancement of what goes by the name “predictive policing,” in which data is used to deploy law enforcement or even military units to places where crime (or, say, an anti-government political protest) is likely to occur. Don’t cringe: predictive policing was born in the United States. But China is poised to emerge as a leader in the field.

Here’s what that means.

First, some background. What is predictive policing? Back in 1994, New York City Police Commissioner William Bratton led a pioneering and deeply controversial effort to pre-deploy police units, on the basis of crime statistics, to places where crime was expected to occur.

Bratton, working with deputy police commissioner Jack Maple, showed that the so-called CompStat program decreased crime by 37 percent in just three years. But it also fueled an unconstitutional practice called “stop-and-frisk,” wherein minority youth in the wrong place at the wrong time were frequently targeted and harassed by the police. The lesson: you can deploy police to hotspots before crime occurs, but you may cause more problems than you solve.

That was in New York.

Wu Manqing, a representative from China Electronics Technology, the company that the Chinese government hired to design the predictive policing software, described the newest version as “a unified information environment,” Bloomberg reported last week. Its applications go well beyond simply sending police to a specific corner. Because Chinese authorities face far fewer privacy limits on the sorts of information they can gather on citizens, they can target police forces much more precisely. They might be able to target an individual who suddenly receives and deposits a large payment into a bank account, who reads pro-democracy news sites, or who displays a change in buying habits, such as purchasing more expensive luxury items. The Chinese government’s control over the Internet in that country puts it in a unique position to extend the reach of surveillance and data collection into the lives of citizens. Chinese authorities plan to deploy the system in places where relations between ethnic minorities and the Chinese Communist Party are particularly strained, according to Bloomberg.

For all the talk in Washington casting China as a rising regional military threat, in 2011 the country began spending more on domestic security and stability, sometimes called wei-wen, than on building up its military. More recent numbers are harder to come by, but many China watchers believe the trend has continued.

After the Arab Spring in 2011, Chinese leaders increased internal security spending by 13 percent to 624 billion yuan, outpacing spending on the military, which was 601 billion yuan. That year, the Chinese government compelled 650 cities to improve their ability to monitor public spaces via surveillance cameras and other technologies. “Hundreds of Chinese cities are rushing to construct their safe city platforms by fusing Internet, video surveillance cameras, cell phones, GPS location data and biometric technologies into central ICT meta-systems,” reads the introduction to a 2013 report on Chinese spending on homeland security technologies from the Homeland Security Research Council, a market research firm in Washington.

China soon emerged as the world’s largest market for surveillance equipment. Western companies, including Bain Capital, the private equity firm co-founded by former GOP presidential candidate Mitt Romney, wanted a piece of a pie worth a potential $132 billion by 2022.

But collecting massive amounts of data leads inevitably to the question of how to analyze it at scale. China is fast becoming a world leader in the use of machine learning and artificial intelligence for national security. Chinese scientists recently presented two papers at the Association for the Advancement of Artificial Intelligence conference, and each points to the future of Chinese research into predictive policing.

One explains how to more easily recognize faces by compressing a Deep Neural Network, or DNN, down to a smaller size. “The expensive computation of DNNs make their deployment difficult on mobile and embedded devices,” it says. Read that to mean: here’s a mathematical formula for getting embedded cameras to recognize faces without calling up a distant database.
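To make that idea concrete, here is a minimal sketch, assuming PyTorch, of one common way to shrink a network and match faces on-device. The toy architecture, sizes, and on-camera gallery below are invented for illustration and are not the paper’s actual method.

```python
# Sketch: compress a small face-embedding network with post-training dynamic
# quantization so it could plausibly run on an embedded camera. Illustrative
# only; architecture and sizes are assumptions, not taken from the paper.
import io
import torch
import torch.nn as nn

class TinyFaceEmbedder(nn.Module):
    """Toy convolutional network that maps a face crop to a 128-d embedding."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 256), nn.ReLU(),
            nn.Linear(256, 128),  # embedding compared locally on the device
        )

    def forward(self, x):
        return self.head(self.features(x))

def size_in_bytes(model: nn.Module) -> int:
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return buf.getbuffer().nbytes

model = TinyFaceEmbedder().eval()

# Store the Linear weights as 8-bit integers instead of 32-bit floats:
# one simple way to "compress a DNN down to a smaller size".
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print("fp32 size:", size_in_bytes(model), "bytes")
print("int8 size:", size_in_bytes(quantized), "bytes")

# Matching on-device: compare a live frame's embedding against a small local
# gallery with cosine similarity -- no round trip to a distant database.
frame = torch.randn(1, 3, 64, 64)
gallery = torch.randn(10, 128)  # pre-enrolled embeddings stored on the camera
scores = torch.nn.functional.cosine_similarity(quantized(frame), gallery)
print("best match:", int(scores.argmax()), float(scores.max()))
```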

The second paper proposes software to predict the likelihood of a “public security event” in different Chinese provinces within the next month. Defense One was able to obtain a short demonstration of the system. The “events” range from the legitimately terrifying (“campus attack,” “bus explosion”) to the more mundane-sounding (“strike event,” “gather event,” the researchers citing one specific “gather” incident as an example), all rated on a severity scale from 1 to 5. To build it, the researchers relied on a dataset of more than 12,324 disruptive occurrences that took place across different provinces going back to 1998.
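For readers curious what such a system looks like in miniature, here is a toy sketch, assuming scikit-learn and NumPy, of predicting a 1-to-5 severity label from a handful of invented features. The synthetic data, feature set, and model choice are illustrative assumptions; the paper’s actual features and method are not described in detail.

```python
# Toy sketch of severity prediction (1-5) per province per month from
# historical incident counts. All data here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Features: [incidents last month, incidents last quarter, province id, month]
X = np.column_stack([
    rng.poisson(3, n),        # incidents in the previous month
    rng.poisson(9, n),        # incidents over the previous quarter
    rng.integers(0, 31, n),   # province index (toy encoding)
    rng.integers(1, 13, n),   # calendar month
])
# Toy label: severity loosely tied to recent incident counts, plus noise
y = np.clip(1 + (X[:, 0] + X[:, 1]) // 4 + rng.integers(-1, 2, n), 1, 5)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
print("predicted severity for a quiet province:", clf.predict([[0, 1, 5, 7]])[0])
```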

The research by itself is not alarming. What government doesn’t have an interest in stopping shootings or even predicting demonstrations?

It’s the Chinese government’s definition of “terrorism” that many in the West find troubling, since the government has used the phantom of public unrest to justify the arrests of peaceful dissidents, such as Uyghur rights activist Rebiya Kadeer.

Those fears increased after the Chinese government passed new anti-terror legislation in December that expanded government surveillance powers and that compels foreign technology companies to assist Chinese authorities in data collection efforts against Chinese citizens. Specifically, the law says that telecommunication and technology companies “shall provide technical interfaces, decryption and other technical support and assistance to public security and state security agencies when they are following the law to avert and investigate terrorist activities.”

The U.S. objected, and State Department spokesman Mark Toner said the law “could lead to greater restrictions on the exercise of freedoms of expression, association, and peaceful assembly.” The FBI’s push to compel Apple to provide a different kind of technical interface into Syed Farook’s iPhone is one reason leaders in China are watching the FBI-versus-Apple debate so closely (and is the epitome of irony).

“Essentially, this law could give the authorities even more tools in censoring unwelcome information and crafting their own narrative in how the ‘war on terror’ is being waged,” human rights worker William Nee told the New York Times.

It could also compel foreign technology companies to assist the Chinese government in acquiring more data to train its predictive policing software. That’s where China’s predictive policing powers enter the picture.

Predictive policing efforts are on the rise around the United States, with programs in Memphis, Tennessee; Chicago, Illinois; Santa Cruz and Los Angeles, California; and elsewhere. Police departments implement them in a variety of ways, many not particularly controversial. Beijing has the resources, the will, and the data to turn predictive policing into something incredibly powerful and, possibly, quite dreadful.

Read the Original Article at Defense One

Sharpen Your Cyber-Skills: NSA Hacker Chief Explains How to Keep Him OUT of Your System


It was the talk most anticipated at this year’s inaugural Usenix Enigma security conference in San Francisco, and one that even the other speakers were eager to hear.

Rob Joyce, the nation’s hacker-in-chief, took up the ironic task of telling a roomful of computer security professionals and academics how to keep people like him and his elite corps out of their systems.

Joyce is head of the NSA’s Tailored Access Operations, the government’s top hacking team, which is responsible for breaking into the systems of its foreign adversaries and, occasionally, its allies. He’s been with the NSA for more than 25 years but only became head of the TAO division in April 2013, just weeks before the first leaks from Edward Snowden were published by the Guardian and the Washington Post.

Joyce acknowledged that it was “very strange” for someone in his position to stand onstage before an audience. The TAO has largely existed in the shadowy recesses of the NSA—known and unknown at the same time—until only recently when documents leaked by Snowden and others exposed the workings of this cabal as well as many of its sophisticated hacking tools.

Joyce himself did little to shine a light on the TAO’s classified operations. His talk was mostly a compendium of best security practices. But he did drop a few of the not-so-secret secrets of the NSA’s success, with many people responding to his comments on Twitter.

How the NSA Gets You

In the world of advanced persistent threat actors (APTs) like the NSA, credentials are king for gaining access to systems. Not the login credentials of your organization’s VIPs, but the credentials of network administrators and others with high levels of network access and privileges that can open the kingdom to intruders. In the words of a recently leaked NSA document, the NSA hunts sysadmins.

The NSA is also keen to find any hardcoded passwords in software or passwords that are transmitted in the clear—especially by old, legacy protocols—that can help them move laterally through a network once inside.
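On the defensive side of that point, here is a minimal sketch of auditing your own source tree for hardcoded credentials before an attacker finds them. The regular expression and file extensions are illustrative assumptions, not an exhaustive secret scanner.

```python
# Sketch: walk a source tree and flag lines that look like hardcoded secrets,
# e.g. password = "hunter2" or api_key: 'abc123'. Naive and illustrative only.
import re
from pathlib import Path

SECRET_PATTERN = re.compile(
    r"""(password|passwd|pwd|secret|api[_-]?key|token)\s*[:=]\s*['"][^'"]{4,}['"]""",
    re.IGNORECASE,
)

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matching line) for every suspected hardcoded secret."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".sh", ".conf", ".ini", ".yml", ".yaml", ".env"}:
            continue
        try:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if SECRET_PATTERN.search(line):
                    hits.append((str(path), lineno, line.strip()))
        except OSError:
            continue  # unreadable file or a directory with a matching name
    return hits

if __name__ == "__main__":
    for file, lineno, line in scan_tree("."):
        print(f"{file}:{lineno}: possible hardcoded credential -> {line}")
```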

And no vulnerability is too insignificant for the NSA to exploit.

“Don’t assume a crack is too small to be noticed, or too small to be exploited,” he said. If you do a penetration test of your network and 97 things pass the test but three esoteric things fail, don’t think they don’t matter. Those are the ones the NSA and other nation-state attackers will seize on, he explained. “We need that first crack, that first seam. And we’re going to look and look and look for that esoteric kind of edge case to break open and crack in.”

Even temporary cracks—vulnerabilities that exist on a system for mere hours or days—are sweet spots for the NSA.

If you’ve got trouble with an appliance on your network, for example, and the vendor tells you to briefly open the network for them over the weekend so they can pop in remotely and fix it, don’t do it. Nation-state attackers are just looking for an opportunity like this, however brief, and will poke and poke your network, patiently waiting for one to appear, he said.

Other vulnerabilities that are favorite attack vectors? The personal devices employees bring into the office, on which they’ve allowed their kids to load Steam games, and which they then connect to the network.

Read the Remainder at Wired

 
