European Commission Launches Research Unit to Investigate Algorithms Used by Big Tech

The European Commission has taken a significant step towards regulating Big Tech by launching a new research unit called the European Centre for Algorithmic Transparency (ECAT). The primary focus of ECAT is to investigate the impact of algorithms made and used by prominent online platforms and search engines such as Facebook and Google. The team will analyze and evaluate the AI-backed algorithms used by Big Tech firms to identify and address any potential risks posed by these platforms.

ECAT will be embedded within the European Union’s existing Joint Research Centre, which conducts research on a broad range of subjects, including artificial intelligence. The team will consist of data scientists, AI experts, social scientists, and legal experts, and will focus on conducting algorithmic accountability and transparency audits, as required by the Digital Services Act, a set of European Union rules enforceable as of Nov. 16, 2022.

AI-based programs are built using a series of complex algorithms, meaning ECAT will also be looking at algorithms that underpin AI chatbots such as OpenAI’s ChatGPT, which some believe could eventually replace search engines. The team will examine the algorithms used by Big Tech firms to ensure that they are transparent and that their operations do not harm users.

According to Thierry Breton, the EU’s internal market commissioner, ECAT will “look under the hood” of large search engines and online platforms to “see how their algorithms function and contribute to the spread of illegal and harmful content.” This move by the European Commission is a significant development in regulating Big Tech firms, and it will ensure that these companies are held accountable for the impact of their algorithms on society.

The development of AI has been a contentious issue, with nearly a dozen EU politicians calling for the “safe” development of AI in a signed open letter on April 16. The lawmakers asked United States President Joe Biden and European Commission President Ursula von der Leyen to convene a summit on AI and agree on a set of governing principles for the development, control, and deployment of the tech.

Tech entrepreneur Elon Musk has also expressed concerns about the development of AI. In an April 17 Fox News interview, he argued that AI chatbots like ChatGPT have a left-wing bias and said that he was developing an alternative called “TruthGPT.” Musk’s move highlights the growing concerns about the ethical implications of AI and its impact on society.

The launch of ECAT marks a significant development in the regulation of Big Tech. Its team of experts will play a vital role in conducting algorithmic accountability and transparency audits, helping to hold these companies accountable for the impact of their algorithms on society and to identify and address any risks their platforms pose.

Global Governments Request More Personal Data from Big Tech

The issue of how Big Tech companies handle user data has been a topic of debate for some time. Meta, Apple, Google, and Microsoft are often accused of collecting and selling the personal data of their users. However, the exact destination of this data and the extent to which companies and governments have access to it remain unclear. A recent study by Surfshark sheds light on the rising trend of government requests for personal user data from Big Tech firms.

The study, which covered the period from 2013 to 2021, found that requests for personal data have been increasing over time. The year 2020 saw the largest year-over-year increase, at 38%, followed by a 25% increase in 2021. The survey included Meta, Microsoft, Apple, and Google, with Meta drawing the most interest from authorities: two out of every five requested accounts were hosted by Meta, a total of 6.6 million over the study period. Apple, on the other hand, had the fewest, with just 416,000 accounts requested by global authorities.
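Year-over-year growth figures like these are straightforward percentage changes. As a quick illustration, with made-up request counts (Surfshark’s raw totals aren’t reproduced here), the calculation looks like this:

```python
def yoy_increase(previous: int, current: int) -> int:
    """Year-over-year change as a percentage, rounded to the nearest whole number."""
    return round((current - previous) / previous * 100)

# Hypothetical request counts chosen to mirror the reported 38% and 25% jumps
requests = {2019: 100_000, 2020: 138_000, 2021: 172_500}

print(yoy_increase(requests[2019], requests[2020]))  # 38
print(yoy_increase(requests[2020], requests[2021]))  # 25
```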

The report reveals that 60% of the requests for personal data came from authorities in the United States and Europe. However, the U.S. requested more than twice as many accounts per 100,000 users as all countries in the European Union combined. Following the U.S., the top requesting countries were Germany, Singapore, the United Kingdom, and France.

According to the study, most data requests are related to criminal investigations and to civil or administrative cases in which digital data is necessary. Gabriele Kaveckyte, privacy counsel at Surfshark, highlights that authorities are also exploring ways to monitor and tackle crime via online services. While this can help solve serious criminal cases, civil society organizations have expressed concerns about the promotion of surveillance techniques.

The tech companies disclosed user data for roughly 71% of requested accounts overall, with Apple leading the pack at an average disclosure rate of 86% in 2021 and 82% across the study period. However, Big Tech’s hold on user information has led to calls for decentralized solutions, such as Web3 tools, to safeguard personal data. Some have even suggested that blockchain technology will render Web2 platforms like Facebook and Twitter obsolete.
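A disclosure rate is simply the share of requested accounts for which a company actually handed over data. A minimal sketch with hypothetical figures (the study’s raw per-company counts aren’t given here):

```python
def disclosure_rate(disclosed: int, requested: int) -> float:
    """Percentage of requested accounts for which data was disclosed."""
    return round(disclosed / requested * 100, 1)

# Hypothetical: 120,000 accounts requested, data disclosed for 103,200 of them,
# producing a rate matching Apple's reported 86% for 2021
print(disclosure_rate(103_200, 120_000))  # 86.0
```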

In February, a decentralized version of Twitter called Damus officially launched in app stores, offering a “social network you control.” Even Big Tech companies have begun to break into the Web3 space, with Meta unsuccessfully introducing nonfungible tokens on Instagram and Facebook. Despite these efforts, there is still much debate and uncertainty regarding the handling of personal data by tech companies and the extent of government access to such data.

Could The New “China Model” Be The Reason The Country Banned Bitcoin Mining?

What is the new “China Model”? And why would that country ban an industry that made them the ultimate leaders in the most important development in recent times? The world is still scratching its head. There has to be something else to this story. Is it only control that they want? Or does China have a secret plan nobody’s been able to figure out?

We at NewsBTC have been studying the case, looking for clues and reporting on related news. After the ban, when Bitcoin’s hash rate collapsed, we covered Bitcoin Magazine’s Lucas Nuzzi’s theory that it all had to do with the Digital Yuan, China’s CBDC. Then we found out Chinese entrepreneurs were selling small hydropower stations and wondered if decommissioning them was part of the plan. After that, the shocking revelation that China’s dominance over Bitcoin mining was already waning before the ban raised more questions than answers.

The fine people at Bloomberg might’ve found new clues by tackling a related but different question. In the article titled “The China Model: What the Country’s Tech Crackdown Is Really About,” they pose a theory about the reasons behind its attack on Alibaba and DiDi, two of China’s giant unicorn tech companies and world leaders in their respective fields. Bloomberg thinks that, after following in Silicon Valley’s footsteps for years, China is trying a new model.


Do they have a case or do China’s motives remain a mystery for us westerners? Keep reading to find out.

What Does The New China Model Consist Of?

The article starts by summarizing what happened when Uber-clone DiDi and “Alibaba’s fintech offshoot, Ant Group Co.” tried to go public in the United States. The Chinese government took action against both companies, and Alibaba’s Jack Ma disappeared from the public eye as a result.

“Just because you are a highly successful tech company does not mean you are above the CCP,” says Michael Witt, a senior affiliate professor of strategy and international business at Insead in Singapore. “Ant Group and Jack Ma found that out for themselves last year, and it is surprising DiDi did not get the message.”

What does this “China Model” have to do with Bitcoin mining? Well, the Chinese government seems to be cracking down on everything huge and technological that isn’t aligned with their interests. And we in the industry know how much Bitcoin those immense mines were producing.


“China is actually taking the lead in setting some boundaries around the power of Big Tech,” says Thomas Tsao, co-founder of Gobi Partners, a venture capital firm based in Shanghai. “People are missing the bigger picture. They’re trying a new model.”

Is Size the Problem For The Chinese Government?

As we learned when we analyzed “The Death Of China’s Bitcoin Mining Industry,” China only banned industrial Bitcoin mining. Individuals can still mine.

“Despite the government’s hardline approach, Ye is determined to carry on: “This industry is extremely volatile. High emotions and stress are involved, but that’s also its appeal. Companies are banned from mining Bitcoin, but individuals aren’t,” Ye said, adding that he plans to turn around his operation by purchasing old equipment and downsizing.”

The Chinese government was only worried about industrial-sized private mining operations. The question is why. What are they planning? 

The Chinese government seems to be playing a similar game when it comes to Big Tech.

Andy Tian, who led Google China’s mobile strategy in the 2000s and is now CEO at Beijing social media startup Asian Innovations Group, says it will be “positive for innovation” and “competition in China will be fiercer than in the U.S.,” because smaller companies will benefit from policies that rein in the largest competitors.

And they’re using the country’s unique characteristics to do this fast and mercilessly.

Angela Zhang, director of Hong Kong University’s Centre for Chinese Law and the author of Chinese Antitrust Exceptionalism, says the intervention will reshape the tech industry in China faster than it could happen elsewhere. “The case against Alibaba took the Chinese antitrust authority only four months to complete, whereas it will take years for U.S. and EU regulators to go after tech firms such as Facebook, Google, and Amazon, who are ready to fight tooth and nail,” she says.

BTC price chart for 08/10/2021 on Coinbase | Source: BTC/USD on TradingView.com

What Does The New China Model Want To Achieve?

This is where Bloomberg’s case falls flat. They have no idea what the Chinese are thinking.

If China is abandoning the Silicon Valley model, what will it replace it with? Insiders suggest it will be less founder-driven and more China-centric.

Why is China cutting its biggest industries and players down to size? Is the “China Model” just concerned with scale? Or is control the focus? Are they cracking down on people and companies that hold too much power and operate on a global scale? We don’t know. However, the facts and assumptions in the following paragraph could provide a clue.

Xi has called the data its tech industry collects “an essential and strategic resource” and has been pushing to tap into it for years. Following a 2015 mandate, cities from Guiyang to Shanghai have set up data exchanges that facilitate the transfer of anonymized information between corporations. This could lead to a nationalized data-sharing system that serves as a kind of digital public infrastructure, putting a massive trove of data into the central government’s hands.

Is it data they’re after? Does Bitcoin’s pseudo-anonymity scare them? Is their crackdown on Big Tech even related to their crackdown on Bitcoin mining? There’s only one thing we can know for sure: China’s making big coordinated moves when it comes to tech. And they seem to have a plan. A “China Model,” if you will.

Featured Image by Markus Winkler from Pixabay - Charts by TradingView

Dan Larimer Reveals New Project to Combat “Tyranny” of Twitter

Key Takeaways

  • Larimer has long had an interest in social media platforms, although previous attempts like Voice were highly centralized.
  • He encouraged people to abandon Twitter after Donald Trump was banned for inciting a violent riot at the U.S. Capitol.
  • The Clarion project was announced the day after far-right platform Gab was hacked.


After quitting his role as EOS CTO in January, Dan Larimer has announced a new censorship-resistant social media project called Clarion.

Larimer Takes on “Big Tech”

Larimer made a GitHub post explaining the concept of his new project. He described a censorship-resistant “friend to friend” network that mirrors the “performance and reliability of a centralized service with the freedom and independence of a decentralized service.”

Larimer claimed that his project would free friends and family from the so-called “tyranny of Twitter, Facebook, YouTube, Amazon, and Google.”

The project drew inspiration from RetroShare, Hive (formerly Steemit), and Voice, with Larimer saying RetroShare is much closer to the desired level of decentralization. Clarion will facilitate email, video chats, and other forms of message propagation.

Larimer wrapped up his announcement by claiming “Big Tech” has locked its users into its services and no longer produces tools to empower people.

Far-Right Politics on the Blockchain

In January, Larimer indicated that his interest in censorship-resistant technology was due in part to Twitter banning former U.S. President Donald Trump.


Trump was banned on January 8 for inciting violence at the U.S. Capitol, with Larimer stating that it was “time to abandon Twitter” the next day.


Larimer also appeared to encourage users to download the Parler social media app from Apple’s App Store before it was removed. Parler, marketed as a censorship-resistant social media platform, gained notoriety as a hub for white supremacists and far-right extremists.

Parler was banned from Amazon and other platforms, as Larimer seems to reference in his cryptic GitHub post.

However, Parler suffered a major security breach when hacktivist @donk_enby scraped the platform and extracted publicly available data revealing extensive information on Parler users, including identities and metadata.


The information could potentially identify many of the people involved in the storming of the U.S. Capitol.


Far-right social media platform Gab was also breached recently, including Donald Trump’s personal account. With Gab and Parler both out of action, the far-right social media presence is limited to more centralized platforms. Larimer announced his project the day after the Gab hack was made public.

Disclosure: The author held Bitcoin at the time of writing.


US Senate Bill Re-Introduces Suspicious Activity Reports for Social Media

Another challenge to Section 230 of the Communications Decency Act, which shields tech platforms from liability for various forms of content posted on them, has re-emerged with bipartisan support. It takes a page from the Bank Secrecy Act (BSA) but, rather than filing Suspicious Activity Reports (SARs), the bill would force tech companies to file “Suspicious Transmission Activity Reports” (STARs) for “illegal activity” on their platforms.

This week, senators Joe Manchin of West Virginia and John Cornyn of Texas reintroduced their “See Something Say Something Online” act, which would force tech companies “to report suspicious activity to law enforcement, similar to the way that banks are required to report suspicious transactions over $10,000 or others that might signal criminal activity.”

According to a summary document from Manchin’s office, companies are “largely shielded from liability for the actions taken by individuals on their platforms, lacking incentives to clean up illicit activity. Even when they do take action, they often just delete the data rather than turning it over to the appropriate authorities, making it more difficult for law enforcement to go after bad actors online. It is past time to hold these sites accountable, and for them to say something when they see something online.”


But many questions remain about why such a bill is needed, including concerns over what actions could fall under the broad umbrella it lays out and what data would be collected. 

Anne Fauvre-Willis is COO at Oasis Labs, a company that focuses on data privacy. She says this is a great example of a bill with nice intentions in theory, but costly implications in practice. 

“I understand regulators want to put more onus on tech companies to protect their users, but this does the opposite,” said Fauvre-Willis in an email. “It violates individuals’ right to privacy and removes them from any sense of control of their data in an undeliberate way.”


No STARs? No Section 230 protections

The bill would create a system “similar to the Bank Secrecy Act by authorizing the creation of an office within the Department of Justice (DOJ) to act as the clearinghouse for these reports, similar to the Financial Crimes Enforcement Network (FinCEN) within the Department of Treasury,” according to a press release from Manchin’s office. 

The bill was re-introduced with a higher reporting threshold, limiting mandatory reports to “serious crimes,” which the release identifies as drug sales, hate crimes, murder, or terrorism, to “ensure that users’ privacy remains safe.”

Read more: FinCEN Encourages Banks to Share Customer Information With Each Other

Tech companies would have to send STARs within 30 days of becoming aware of any such information. “Suspicious transmissions” could include a wide array of material, including a “public or private post, message, comment, tag, transaction, or any other user-generated content or transmission that commits, facilitates, incites, promotes, or otherwise assists the commission of a major crime.”

If the companies choose not to do so, they will be stripped of Section 230 protections, with the end result likely being they would be sued into oblivion. 

By threatening to remove Section 230 protections for failing to comply with the bill, it makes the filings of STARs mandatory in practice if not in word. So, to ensure these companies are able to continue to exist they will be forced to further transgress upon users’ data privacy. 

STARs would be accompanied by a host of personal information associated with the post’s originator. 


They would include the name, location, and identity information given to the platform; the time, origin, and destination of the transmission; and any relevant text, information, and metadata related to it. It’s not clear how wide or narrow that relevant information could be. Entities filing STARs would have to keep them on record for five years after filing.
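To make the scope of a filing concrete, here is a hypothetical sketch of the fields a STAR would bundle, following the list above. The bill defines no technical schema, so every name in this snippet is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SuspiciousTransmissionActivityReport:
    """Hypothetical STAR record; field names are illustrative, not from the bill."""
    user_name: str              # name given to the platform
    user_location: str          # location information on file
    identity_info: dict         # other identity details the platform holds
    transmission_time: str      # when the flagged transmission occurred
    origin: str                 # origin of the transmission
    destination: str            # destination of the transmission
    content_and_metadata: dict  # relevant text, information, and metadata
    retention_years: int = 5    # filings must be kept on record for five years

# Illustrative instance with invented data
star = SuspiciousTransmissionActivityReport(
    user_name="jane_doe",
    user_location="US",
    identity_info={"email": "jane@example.com"},
    transmission_time="2021-01-15T12:00:00Z",
    origin="direct_message",
    destination="user_4521",
    content_and_metadata={"text": "flagged message text"},
)
print(star.retention_years)  # 5
```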

A blanket gag order also means the targets of STARs would not be informed about them. And STARs would also not be subject to Freedom of Information Act (FOIA) requests.

Additionally, the bill calls for the creation of a department under the DOJ to manage these reports. There would also be a centralized online resource established that could be used by any member of the public to report to law enforcement any suspicious activity related to “major crimes.” 

“With an overly broad definition of reporting ‘suspicious activity,’ the bill completely ignores consumer privacy protections and defaults to a world where the government knows best,” said Fauvre-Willis. 

“In practice what this means is that, if passed, companies would have to pass along large swaths of data that may be relevant but also very much may not be. This data could include sensitive information about individuals including emails, age, social security numbers and who knows what else.”

How STARs create a data honeypot

Compelling companies to divulge personal information on a regular basis, across the billions of posts, messages, tags, and other actions people take every day, seems like a great way to create a massive honeypot of personal data, one with troubling implications.

“The ‘see something, say something’ approach has been thoroughly debunked in the offline context – as leading to invasions of privacy while not advancing public safety – and it would be even more negative in the context of online platforms,” said Nadine Strossen, a law professor at New York Law School and former president of the ACLU.


The bill specifically outlines the creation of a centralized online resource where people (anyone, seemingly) could file STARs. Whether tech companies would then have to provide personal information on users who had STARs filed against them by members of the public is an open question the 11-page bill fails to address.

Read more: How FinCEN Became a Honeypot for Sensitive Personal Data

“Creating a clearinghouse for this data in a centralized system run by the federal government seems fraught for security risk,” said Fauvre-Willis. “Holding sensitive data is no easy task, and sharing it in a way that is safe and protected, even harder. And once the government has this data what will they do with it? This bill feels fraught with challenges and half-thinking.”

Data is sensitive, and the avalanche of it this bill might produce could become a tempting honeypot for people interested in using that data in ways limited only by their imagination.

“It’s creating a facility for the public to report bad tweets,” said Jerry Brito, the executive director of Coin Center, in a phone call. “Have you seen Twitter?”

Strossen said the legislation would also encourage and empower anyone to wreak havoc on particular users or platforms, simply by filing a STAR. 

“Given the vague, broad descriptions of ‘suspicious activity,’ which turn on subjective judgments,  a limitless array of posts could be claimed to fit within them,” she said in an email.  “People could weaponize this law to make life miserable for anyone from political opponents, to economic competitors, to individuals they dislike.”

Free speech, data privacy and decentralization

Conversely, Strossen said, “Plausible arguments can be made that this law violates platform users’ free speech and privacy rights, because the federal government deputizes platforms to monitor and disclose detailed information about their users’ communications.”

“Government can’t do an end-run around constitutional constraints on its own actions by forcing platforms to engage in spying and censorship that the government wouldn’t be permitted to engage in directly.”


Not only would the bill seemingly require companies to monitor direct messages they otherwise might not, it also discourages the adoption of end-to-end encryption. Such encryption would stop companies from seeing into messages sent by individuals, which could make them unable to comply with STAR filings.

“What that means is that Twitter has to be searching, constantly monitoring your DMs for suspicious stuff,” said Brito. “And then informing on it. That’s problematic for all the reasons you can imagine.”

Read more: Google Down: The Perils of Centralization

Brito says he thinks the reaction among tech companies would actually be to move toward encryption, as Apple and WhatsApp have done, though he doesn’t think the term “private” in the bill is specifically referring to encrypted communications. 

“They’re going to say, ‘All of the communications that we provide on our platforms are end-to-end encrypted and so we can’t see into our customers communications,’” he said. “And then the government’s going to come back by saying, ‘Okay, we need a backdoor then.’ So that’s one thing. The other thing is it’s going to push folks towards decentralization.”

In decentralized systems, there isn’t one centralized body (or company) that can unilaterally decide to adhere to such regulation and begin to surveil users’ communications. 

The impending data deluge: Who is asking for this?

The BSA, from which the thrust of this act borrows heavily, has resulted in compliance officers filing a SAR on anything that might possibly lead to liability for the financial institutions. 

As such, banks have been filing more and more SARs, the number of which has nearly doubled in the last decade. 
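For a sense of scale, “nearly doubled in a decade” implies fairly modest annual growth. Pure arithmetic (not FinCEN data) shows that compounding at about 7% per year doubles a quantity in roughly ten years:

```python
# Implied compound annual growth rate if SAR filings doubled over ten years:
# solving (1 + r) ** 10 == 2 gives r = 2 ** (1/10) - 1 ≈ 0.0718
annual_growth = 2 ** (1 / 10) - 1
print(f"{annual_growth:.1%}")  # 7.2%
```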


As a financial compliance lawyer described in an earlier interview, financial institutions have been doing more defensive SAR filing, turning what was a thoughtful process into something that is more akin to just checking the box. Essentially, the idea is banks are filing large numbers of SARs to protect themselves from liability or being hit with fines for potential noncompliance with the BSA. 

It’s hard to imagine this bill doing anything different, but using STARs instead. 

Brito also raised the question of whether the potential deluge of information is something law enforcement even wants. For example, as the number of SARs has risen, FinCEN has shrunk. That leaves relatively few people to analyze all the SARs that come in, potentially limiting the quality of the intelligence they’re seeking to gather.

“Did the sponsors of this bill talk to law enforcement?” he asked. “Because as a result of this they could very well get tens of thousands of reports for whenever anybody uses the word bomb, for example, like ‘that club was the bomb.’ That doesn’t help them and they’re going to have to go through them all.”

This also doesn’t take into account that Facebook and other social media platforms already have compliance teams that work closely with law enforcement on these sorts of issues. Facebook and Instagram report and take down millions of instances of child pornography annually, for example. 

“Who is this meant to cover that isn’t already doing this today?” said Brito.

Squashing competition

For all the consternation around Big Tech and the antitrust legislation being rolled out, yet another side effect of this bill would be to hamper the ability of other tech companies to compete with the already dominant platforms.

“As with any such burdensome regulation, another adverse impact would be to further entrench the already dominant online platforms, such as Facebook and Google, and to raise further barriers to entry for new, small companies,” said Strossen. “The giants have the resources to contend with the regulatory requirements, but their potential competitors do not.”


Content moderation is itself a tall task, one that requires resources, systems, and attention. Creating additional obstacles, as this bill does, would sharply increase the upfront costs of getting into the game at all, and provide myriad reasons why someone shouldn’t.

“This bill, like many that seek to regulate the internet before it, has the indirect effect of hurting small startups and entrepreneurs more than anything,” said Fauvre-Willis. “The more these bills go into action, the greater moat large companies have against small innovators. Facebook and Google can hire lawyers and teams to manage this process if they need to. An early stage company cannot. This has the unintended consequence of stifling innovation as a result.”

