Infopocalypse Now

This is one of the most important topics for the Martial Citizen today. Twisting and maligning information to fit a certain agenda is the new norm in “News.” Please stay truly informed and don’t be fooled!

 

He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse.

“What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?” technologist Aviv Ovadya warns.

 

In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm. A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it were wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both. Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms — including a few from Facebook who would later go on to drive the company’s News Feed integrity effort.

 

Read the Remainder at BuzzFeed News

Surveillance State: Everything We Know About How the FBI Hacks People


RECENT HEADLINES WARN that the government now has greater authority to hack your computers, in and outside the US. Changes to federal criminal court procedures known as Rule 41 are to blame; they vastly expand how and whom the FBI can legally hack. But just like the NSA’s hacking operations, FBI hacking isn’t new. In fact, the bureau has a long history of surreptitiously hacking us, going back two decades.

That history is almost impossible to document, however, because the hacking happens mostly in secret. Search warrants granting permission to hack get issued using vague, obtuse language that hides what’s really happening, and defense attorneys rarely challenge the hacking tools and techniques in court. There’s also no public accounting of how often the government hacks people. Although federal and state judges have to submit a report to Congress tracking the number and nature of wiretap requests they process each year, no similar requirement exists for hacking tools. As a result, little is known about the invasive tools the bureau, and other law enforcement agencies, use or how they use them. But occasionally, tidbits of information do leak out in court cases and news stories.

A look at a few of these cases offers a glimpse at how FBI computer intrusion techniques have developed over the years. Note that the government takes issue with the word “hacking,” since this implies unauthorized access, and the government’s hacking is court-sanctioned. Instead it prefers the terms “remote access searches” and Network Investigative Techniques, or NIT. By whatever name, however, the activity is growing.

1998: The Short But Dramatic Life of Carnivore

The FBI’s first known computer surveillance tool was a traffic sniffer named Carnivore that got installed on network backbones — with the permission of internet service providers. The unfortunately named tool was custom-built to filter and copy metadata and/or the content of communications to and from a surveillance target. The government had already used it about 25 times, beginning in 1998, by the time the public finally learned of it in 2000, after EarthLink refused to let the FBI install the tool on its network. EarthLink feared the sniffer would give the feds unfettered access to all customer communications. A court battle and congressional hearing ensued, which sparked a fierce and divisive debate, making Carnivore the Apple/FBI case of its day.

Read the Remainder at Wired

Think Before You Post: The Future of Social Media Monitoring In the United States

This may have occurred in Scotland, but it is a dark foreshadowing of the kind of “monitoring” and “regulations” coming to social media here in America. So for all you poor souls out there addicted to social media, good luck with all that. -SF


Glasgow police are warning people not to post unnecessary things on social media or else they might “receive a visit” from the police.

In a tweet Friday morning, Greater Glasgow Police wrote, “Think before you post or you may receive a visit from us this weekend. Use the internet safely.”

They also included a graphic of what people should “think” about before they post.

According to the graphic, people are encouraged to “#thinkbeforeyoupost” anything that is not deemed “necessary” or else they will receive “a visit from us this weekend.”

The tweet by the Greater Glasgow Police Department comes as Police Scotland issued a statement regarding social media comments made by an imam at Glasgow Central Mosque.

The statement from Superintendent Jim Baird of Police Scotland’s Safer Communities Department reads, “Officers have reviewed all comments as reported to Police Scotland, and whilst it is appreciated that individuals raise issues that concern them, on this occasion no criminality has been established.”

Perhaps if the imam had posted something unnecessary on social media as opposed to simply praising a terrorist on social media, the police would have visited him.

Read the Original at MRCTV

Espionage Files: Watch Thy Neighbor


To prevent whistleblowing, U.S. intelligence agencies are instructing staff to spy on their colleagues.

Elham Khorasani was sitting in her car at a stoplight in Northern Virginia when she got the call. It was April 16, 2013. “I’m with the FBI,” a man on the line said, “and we’re at your home executing a search warrant.”

Khorasani was flummoxed. (A pseudonym is being used to protect her privacy.) The Iran native, a U.S. citizen since the 1990s, had worked as a Farsi and Dari language analyst at the National Security Agency (NSA) going on eight years. She had recently been selected for a second tour at Menwith Hill station, the NSA’s mammoth listening post in northern England. Minutes before the FBI called, she’d left a meeting at the Office of the Director of National Intelligence (ODNI).

“When he said, ‘FBI,’ my mind was going all over the place,” Khorasani says, adding that the most illegal thing she has ever done is get an occasional parking ticket. Yet the agent gave her no information, only instructing her to return to her apartment immediately.

Khorasani describes her life after that day as a nightmare. “They suspended my clearances without giving me any reason,” she remembers. She wasn’t allowed at work, and for two years, the NSA made her “call every day like a criminal, checking in every morning before 8.” Khorasani went to the agency only for interrogations, she says: eight or nine sessions that ran at least five hours each. She was asked about her family, her travel, and her contacts.

Read the Remainder at Foreign Policy

“Predictive Policing”: The Cyber Version of “Stop and Frisk”


Thanks America! How China’s Newest Software Could Track, Predict, and Crush Dissent

Armed with data from spying on its citizens, Beijing could turn ‘predictive policing’ into an AI tool of repression.

What if the Communist Party could have predicted Tiananmen Square? The Chinese government is deploying a new tool to keep the population from rising up. Beijing is building software to predict instability before it arises, based on volumes of data mined from Chinese citizens about their jobs, pastimes, and habits. It’s the latest advancement of what goes by the name “predictive policing,” where data is used to deploy law enforcement or even military units to places where crime (or, say, an anti-government political protest) is likely to occur. Don’t cringe: Predictive policing was born in the United States. But China is poised to emerge as a leader in the field.

Here’s what that means.

First, some background. What is predictive policing? Back in 1994, New York City Police Commissioner William Bratton led a pioneering and deeply controversial effort to pre-deploy police units to places where crime was expected to occur on the basis of crime statistics.

Bratton, working with deputy police commissioner Jack Maple, showed that the so-called CompStat program decreased crime by 37 percent in just three years. But it also fueled an unconstitutional practice called “stop-and-frisk,” wherein minority youth in the wrong place at the wrong time were frequently targeted and harassed by the police. Lesson: you can deploy police to hotspots before crime occurs, but you can cause more problems than you solve.
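At its core, the hotspot logic behind CompStat-style deployment is simple: rank areas by recent incident counts and send extra units to the top of the list. A minimal sketch of that idea (the precinct names and incident data here are invented for illustration):

```python
from collections import Counter

# Hypothetical incident log: (precinct, incident_type) pairs.
incidents = [
    ("precinct_7", "robbery"), ("precinct_7", "assault"),
    ("precinct_7", "robbery"), ("precinct_3", "burglary"),
    ("precinct_3", "robbery"), ("precinct_9", "vandalism"),
]

# Count incidents per precinct and rank the "hotspots".
counts = Counter(area for area, _ in incidents)
hotspots = [area for area, _ in counts.most_common(2)]

print(hotspots)  # the two busiest precincts get extra patrols
```

The controversy, of course, is not in the counting but in what officers do once they arrive — the statistics say nothing about who, in a busy precinct, actually deserves police attention.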

That was in New York.

Wu Manqing, a representative from China Electronics Technology, the company that the Chinese government hired to design the predictive policing software, described the newest version as “a unified information environment,” Bloomberg reported last week. Its applications go well beyond simply sending police to a specific corner. Because Chinese authorities face far fewer privacy limits on the sorts of information that they can gather on citizens, they can target police forces much more precisely. They might be able to target an individual who suddenly received and deposited a large payment to their bank account, or who reads pro-democracy news sites, or who is displaying a change in buying habits — purchasing more expensive luxury items, for instance. The Chinese government’s control over the Internet in that country puts it in a unique position to extend the reach of surveillance and data collection into the lives of citizens. Chinese authorities plan to deploy the system in places where relations between ethnic minorities and the party are particularly strained, according to Bloomberg.
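The individual-level targeting described above amounts to rule-based flagging over fused data sources. The article gives no details of the actual system, but the signals it names (a sudden large deposit, reading flagged sites, a shift in buying habits) can be caricatured as a toy rule set — every field name and threshold below is invented:

```python
# Hypothetical account-activity record (all fields and thresholds invented).
profile = {
    "recent_deposit": 250_000,       # sudden one-off payment
    "avg_monthly_deposit": 4_000,
    "reads_flagged_sites": True,
    "luxury_spend_ratio": 3.2,       # spending vs. personal baseline
}

def flag_for_review(p: dict) -> bool:
    """Toy rule set mirroring the signals described in the article."""
    sudden_windfall = p["recent_deposit"] > 10 * p["avg_monthly_deposit"]
    habit_shift = p["luxury_spend_ratio"] > 2.0
    return sudden_windfall or p["reads_flagged_sites"] or habit_shift

print(flag_for_review(profile))  # True
```

The point of the sketch is how little sophistication such targeting requires once the data is centralized — the hard part is the surveillance, not the software.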

For all the talk in Washington casting China as a rising regional military threat, the country began spending more on domestic security and stability, sometimes called wei-wen, than on building up its military in 2011. More recent numbers are harder to come by, but many China watchers believe the trend has continued.

After the Arab Spring in 2011, Chinese leaders increased internal security spending by 13 percent to 624 billion yuan, outpacing spending on the military, which was 601 billion yuan. That year, the Chinese government compelled 650 cities to improve their ability to monitor public spaces via surveillance cameras and other technologies. “Hundreds of Chinese cities are rushing to construct their safe city platforms by fusing Internet, video surveillance cameras, cell phones, GPS location data and biometric technologies into central ICT meta-systems,” reads the introduction to a 2013 report on Chinese spending on homeland security technologies from the Homeland Security Research Council, a market research firm in Washington.

China soon emerged as the world’s largest market for surveillance equipment. Western companies, including Bain Capital, the equity firm founded by former GOP presidential candidate Mitt Romney, wanted a piece of a pie worth a potential $132 billion in 2022.

But collecting massive amounts of data leads inevitably to the question of how to analyze it at scale. China is fast becoming a world leader in the use of machine learning and artificial intelligence for national security. Chinese scientists recently presented two papers at the Association for the Advancement of Artificial Intelligence conference, and each points to the future of Chinese research into predictive policing.

One explains how to more easily recognize faces by compressing a Deep Neural Network, or DNN, down to a smaller size. “The expensive computation of DNNs make their deployment difficult on mobile and embedded devices,” it says. Read that to mean: here’s a mathematical formula for getting embedded cameras to recognize faces without calling up a distant database.
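The paper’s actual compression method isn’t spelled out in the article, but magnitude pruning is one common way to shrink a network for embedded deployment: zero out the smallest weights so the layer can be stored sparsely. A minimal sketch of that general technique (not the paper’s specific formula):

```python
import numpy as np

def prune_weights(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights, keeping only keep_fraction.

    A mostly-zero matrix can be stored and multiplied sparsely, which is
    what makes a big DNN fit on a camera's embedded processor.
    """
    k = int(weights.size * keep_fraction)
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

layer = np.array([[0.9, -0.05, 0.4],
                  [0.01, -0.7, 0.1]])
pruned = prune_weights(layer, keep_fraction=0.5)
# Only the 3 largest-magnitude weights (0.9, -0.7, 0.4) survive.
```

Real compression pipelines combine pruning with quantization and retraining, but the goal is the same one the paper states: inference on the device itself, with no round trip to a distant database.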

The second paper proposes software to predict the likelihood of a “public security event” in different Chinese provinces within the next month. Defense One was able to obtain a short demonstration of the system. The “events” range from the legitimately terrifying “campus attack” or “bus explosion” to the more mundane-sounding “strike event” or “gather event,” all rated on a severity scale from 1 to 5. To build it, the researchers relied on a dataset of 12,324 disruptive occurrences that took place across different provinces going back to 1998.
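The article doesn’t describe the paper’s model, but the forecasting framing — per-province, per-type event likelihood for the coming month, learned from a historical log — can be illustrated with the most naive possible baseline: project each historical rate forward. All provinces, events, and numbers below are invented:

```python
from collections import defaultdict

# Hypothetical slice of a historical event log: (province, event_type).
history = [
    ("province_a", "strike event"), ("province_a", "gather event"),
    ("province_a", "strike event"), ("province_b", "gather event"),
]

# Crude per-province, per-type monthly rate over an assumed 12-month window.
MONTHS = 12
rates = defaultdict(float)
for province, event in history:
    rates[(province, event)] += 1 / MONTHS

def likelihood_next_month(province: str, event: str) -> float:
    # Expected count next month under a naive "past rates continue" model.
    return rates[(province, event)]

print(likelihood_next_month("province_a", "strike event"))  # 2/12 ≈ 0.167
```

A real system would condition on far richer signals than raw counts, which is exactly why the data-collection powers described in the rest of the article matter so much to the model’s reach.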

The research by itself is not alarming. What government doesn’t have an interest in stopping shootings or even predicting demonstrations?

It’s the Chinese government’s definition of “terrorism” that many in the West find troubling, since the government has used the phantom of public unrest to justify the arrests of peaceful dissidents, such as women’s rights worker Rebiya Kadeer.

Those fears increased after the Chinese government passed new anti-terror legislation in December that expanded government surveillance powers and that compels foreign technology companies to assist Chinese authorities in data collection efforts against Chinese citizens. Specifically, the law says that telecommunication and technology companies “shall provide technical interfaces, decryption and other technical support and assistance to public security and state security agencies when they are following the law to avert and investigate terrorist activities.”

The U.S. objects, and State Department spokesman Mark Toner said the law “could lead to greater restrictions on the exercise of freedoms of expressions, association, and peaceful assembly.” The FBI’s push to compel Apple to provide a different technical interface into Syed Farook’s iPhone is one reason leaders in China are watching the FBI versus Apple debate so closely (and the epitome of irony).

“Essentially, this law could give the authorities even more tools in censoring unwelcome information and crafting their own narrative in how the ‘war on terror’ is being waged,” human rights worker William Nee told the New York Times.

It could also compel foreign technology companies to assist the Chinese government in the acquisition of more data to train predictive policing software efforts. That’s where China’s predictive policing powers enter the picture.

Predictive policing efforts are rising around the United States, with programs in Memphis, Tennessee; Chicago, Illinois; Santa Cruz and Los Angeles, California; and elsewhere. Police departments implement them in a variety of ways, many not particularly controversial. Beijing has the resources, the will, the data, and the inclination to turn predictive policing into something incredibly powerful and, possibly, quite dreadful.

Read the Original Article at Defense One