Google Sues Unknown Entities for Malware Scheme Disguised as AI Chatbot Bard

Google LLC has initiated legal proceedings against three anonymous individuals, accusing them of orchestrating a sophisticated malware distribution scheme under the guise of offering upgrades to Google’s AI chatbot, Bard. The lawsuit, filed on November 13, 2023, in the Northern District of California, San Jose Division, identifies the defendants as “Does 1-3,” reflecting Google’s current lack of specific identification.

Google alleges that the perpetrators have been exploiting the company’s trademarks, particularly those related to its AI products like “Google, Google AI, and Bard.” By creating misleading social media profiles and pages that mimic Google’s branding, these individuals have reportedly been luring victims into downloading malware. The fraud involves invitations to download free copies of Bard and other AI tools, purportedly from Google.

One striking example provided by Google includes a screenshot of a bogus “Google AI” social media profile used by the con artists. These profiles and pages are designed to deceive users into believing they are interacting with legitimate Google products.

Upon following the provided links, users unwittingly download malware, which is particularly engineered to hijack social media login credentials. This scheme is said to target businesses and advertisers primarily, exploiting their reliance on social media for marketing and communications.

In response to these fraudulent activities, Google has asked the court to grant a comprehensive restraining order and award damages, including attorneys’ fees. The tech giant is also seeking permanent injunctive relief for the harms caused by the defendants, disgorgement of any profits gained from the fraud, and other equitable relief the court deems appropriate.

This lawsuit emerges at a time when AI services, especially chatbot services, are experiencing a significant increase in global users. Recent data reveals Google’s Bard bot attracting 49.7 million individual visits each month, while OpenAI’s ChatGPT records over 100 million monthly logins and approximately 1.5 billion unique website visits.

Over the past year, major tech companies like Google, OpenAI, and Meta have been embroiled in various legal disputes. In July, Google faced a class-action lawsuit, underscoring the legal complexities in the rapidly evolving AI and digital services sector.

This case underscores the critical need for heightened digital security measures as AI technology becomes more integrated into everyday digital interactions. Google’s lawsuit not only seeks to safeguard its own intellectual property but also aims to protect unsuspecting users from malicious cyber activities disguised as legitimate AI offerings.

Image source: Shutterstock



U.S. Prosecutors Aim to Sideline Asset Recovery Talks in Bankman-Fried Trial

The U.S. prosecution team has formally asked the United States District Court for the Southern District of New York to preclude the defense from introducing arguments or evidence about the current value of specific investments in the trial of Samuel Bankman-Fried, the former CEO of cryptocurrency exchange FTX. The request follows allegations that Bankman-Fried misappropriated FTX customer deposits to make a substantial investment in the artificial intelligence startup Anthropic, an investment whose value may have appreciated significantly amid Anthropic’s recent fundraising efforts.

In a move that captured industry attention, Bankman-Fried invested approximately $500 million in Anthropic in April 2022. That investment is now under the legal microscope, as the prosecution claims it was funded with misappropriated FTX customer deposits. The picture grows more complex given that Anthropic recently disclosed attempts to secure additional capital from potential investors, including industry behemoths Amazon and Google, at a valuation of between $20 billion and $30 billion. Any appreciation of Bankman-Fried’s stake could play a pivotal role in the recovery of assets for FTX customers and other creditors in the FTX bankruptcy.

The crux of the prosecution’s argument is that the court should bar any discussion of Anthropic’s increased valuation, on the grounds that such discussion would suggest FTX customers and other victims could eventually be compensated for their losses, a notion the court has previously deemed an “impermissible purpose.” The government maintains that the focus should remain on the alleged wire fraud, emphasizing that the potential profitability of investments made with misappropriated funds is immaterial to the charges being deliberated.

As the trial unfolds, both legal teams have been negotiating various evidentiary matters. The prosecution’s current request underscores its intent to keep the trial focused solely on the alleged misappropriation and fraud, free of financial technicalities concerning the current or future value of the assets involved. The development encapsulates a broader tension between the quest for justice for the alleged victims and the financial ramifications of the defendant’s actions.




Google Updates Advertising Policy on NFT Gaming

Google has clarified its stance on advertising for blockchain-based games involving non-fungible tokens (NFTs). Effective from September 15, 2023, the tech giant announced,

Beginning September 15, 2023, advertisers offering NFT games that do not promote gambling-related content may advertise those products and services when they meet the following requirements.

The revised policy stipulates that advertisers can promote NFT games that allow players to purchase in-game items, such as “virtual apparel for a player’s characters, weaponry, or armor with better stats, consumed or used in a game to enhance a user’s experience or aid users in advancing the game.” However, Google explicitly mentioned that for NFT games, the following are not allowed:

“Games that allow players to stake NFTs in exchange for fungible tokens such as cryptocurrencies” and “simulated casino gambling (for example, poker, slots or roulette) that offers the opportunity to win NFTs.”

To advertise gambling-related content that incorporates NFTs, advertisers must adhere to Google’s “Gambling and games policy” and, as Google points out, they need to “receive the proper Google Ads certification.”

This policy update is a significant shift from Google’s previous stance. In 2018, Google had imposed a blanket ban on all cryptocurrency-related advertising across its platforms. The ban was later relaxed in June 2021, allowing certain companies to advertise.

Reiterating the importance of compliance, Google stated,

As a reminder, we expect all advertisers to comply with the local laws for any area that their ads target. This policy will apply globally to all accounts that advertise these products.

Importantly, Google also clarified the consequences of violations, noting that 

Violations of this policy will not lead to immediate account suspension without prior warning. A warning will be issued at least 7 days before any suspension of your account.

This policy revision aims to provide clearer guidelines for advertising blockchain-based games with NFTs, reflecting the evolving landscape of the digital gaming and cryptocurrency sectors.

Google has increasingly been embracing crypto- and NFT-related products. As reported by Blockchain.News on July 12, 2023, the Google Play Store announced that video game publishers can now sell NFT games on its platform. Joseph Mills, the store’s group product manager, highlighted the move as a step toward integrating blockchain into mainstream gaming. The new policy requires developers to clearly indicate any blockchain-based elements in their apps on the Play Console when offering tokenized assets.





Google DeepMind Introduces SynthID to Combat Deepfakes and AI-Generated Imagery

Google DeepMind has unveiled SynthID, a watermarking technology aimed at identifying AI-generated images. The tool embeds an “imperceptible” watermark into the pixels of images generated by AI, making them easily identifiable by specialized detection tools. The technology is initially available to Google Cloud customers using the Vertex AI platform and the Imagen image generator.

As the 2024 election season looms in the U.S. and the U.K., the issue of deepfakes and AI-generated content has gained heightened attention. Google DeepMind’s CEO, Demis Hassabis, emphasized the increasing importance of systems that can identify and detect AI-generated imagery. “Every time we talk about it and other systems, it’s, ‘What about the problem of deepfakes?’” Hassabis said in an interview with The Verge on August 29, 2023.

How SynthID Works

SynthID works by embedding a watermark directly into the pixels of an AI-generated image. According to Hassabis, this watermark is “robust to various transformations — cropping, resizing, all of the things that you might do to try and get around normal, traditional, simple watermarks.”

The watermark does not alter the quality or the experience of the image but makes it easily detectable by DeepMind’s tools.
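SynthID’s actual embedding scheme is proprietary, and per Hassabis it survives cropping and resizing. Purely to illustrate the general idea of hiding a detectable signal in pixel values, the sketch below implements a naive least-significant-bit (LSB) watermark in Python; unlike SynthID, this toy scheme would not survive cropping, resizing, or re-encoding, and all function names and the key pattern here are invented for the example.

```python
import numpy as np

def embed_lsb_watermark(image: np.ndarray, key_bits: np.ndarray) -> np.ndarray:
    """Write a repeating bit pattern into the least significant bit of each pixel."""
    flat = image.flatten()
    pattern = np.resize(key_bits, flat.shape)  # tile the key across every pixel
    marked = (flat & 0xFE) | pattern           # clear the LSB, then set it to the key bit
    return marked.reshape(image.shape).astype(np.uint8)

def match_score(image: np.ndarray, key_bits: np.ndarray) -> float:
    """Fraction of pixels whose LSB matches the expected key pattern."""
    flat = image.flatten()
    pattern = np.resize(key_bits, flat.shape)
    return float(np.mean((flat & 1) == pattern))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "generated image"
key = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed_lsb_watermark(img, key)
print(match_score(marked, key))  # a freshly marked image scores 1.0
```

A score of 1.0 means every pixel’s low bit matches the key; an unmarked image agrees only about half the time by chance, which is what makes the signal detectable by a tool that knows the key, yet invisible to the eye (each pixel changes by at most one intensity level).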

Availability and Future Plans

Initially, SynthID is being rolled out to a limited number of Vertex AI customers using Imagen, one of DeepMind’s latest text-to-image models. The technology was developed in collaboration with Google Research and is part of Google Cloud’s broader strategy to offer tools for creating AI-generated images responsibly.

Thomas Kurian, Google Cloud’s CEO, noted that the Vertex AI platform is experiencing rapid growth, making it an opportune time to launch SynthID. “The models are getting more and more sophisticated, and we’ve had a huge, huge ramp in the number of people using the models,” Kurian said.

Broader Implications

While SynthID is not the first tool designed to combat deepfakes, it joins a growing list of initiatives from major tech companies like Meta, Microsoft, and Amazon. These companies are increasingly investing in content verification solutions to maintain digital integrity. However, SynthID is not a “silver bullet to the deepfake problem,” according to Hassabis. The technology is still in its beta phase, and Google plans to refine it based on real-world testing and user feedback.




Google Takes Concerted Steps to Conform to EU’s Digital Services Act

Google is actively adapting its services to meet the European Union’s Digital Services Act (DSA), which entered into force on November 16, 2022. The DSA targets platforms and search engines with more than 45 million monthly users in the EU, categorizing them as “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). According to EU guidelines, such entities (Google Maps, Google Play, and Google Shopping among the VLOPs; Google Search among the VLOSEs) have a four-month window to comply with the DSA.

In a blog post, Google outlined its adaptations to meet the DSA’s specific requirements. “We have made significant efforts to adapt our programs to meet the Act’s specific requirements,” the company stated.

These efforts include:

Ads Transparency Center Expansion: Google will “be expanding the Ads Transparency Center, a global searchable repository of advertisers across all our platforms, to meet specific DSA provisions and providing additional information on targeting for ads served in the European Union.”

Data Access for Researchers: Google is committed to “increase data access for researchers looking to understand more about how Google Search, YouTube, Google Maps, Google Play and Shopping work in practice.”

Content Moderation Transparency: Google is “making changes to provide new kinds of visibility into our content moderation decisions and give users different ways to contact us.”

The DSA mandates that designated VLOPs and VLOSEs must “establish a point of contact, report criminal offenses, have user-friendly terms and conditions, and be transparent as regards advertising, recommender systems or content moderation decisions.” They are also required to “identify, analyse, and assess systemic risks that are linked to their services,” including risks related to “illegal content, fundamental rights, public security, and electoral processes.”

While Google is taking steps to comply, the company has also expressed reservations. As the EU’s guidelines explain, “The designation triggers specific rules that tackle the particular risks such large services pose to Europeans and society when it comes to illegal content, and their impact on fundamental rights, public security, and wellbeing.”

The DSA, along with its sister regulation, the Digital Markets Act (DMA), aims to create a safer digital space and establish a level playing field that fosters innovation, growth, and competitiveness. Other designated VLOPs include Alibaba AliExpress, Amazon Store, and the Apple App Store, while Bing joins Google Search among the VLOSEs.

In summary, the Digital Services Act represents a significant regulatory milestone in the European digital landscape. As Google and other tech giants navigate the complexities of compliance, the broader implications for users and the digital ecosystem are yet to unfold. With the DSA set to be directly applicable across the EU from January 1, 2024, or fifteen months after its entry into force, whichever comes later, the clock is ticking for platforms to align their operations accordingly.




Google Introduces Generative AI Features to Enhance Search Experience

On August 15, Google unveiled a transformative series of updates to its iconic search engine. These changes, rooted in advanced generative AI technologies, are set to redefine the paradigms of online content discovery and comprehension.

The tech giant’s commitment to innovation is evident in the enhancements made to the Search Generative Experience (SGE), a feature that had its initial beta launch earlier in 2023.

Programming, for newcomers and seasoned professionals alike, is an ever-evolving discipline. Recognizing the challenges and the continuous learning curve associated with coding, Google’s SGE now offers AI-generated overviews tailored to a multitude of programming languages and tools. These aren’t just basic summaries: they are also intended to provide practical advice, answers to common how-to queries, and even code samples for typical tasks.

Among the update’s most prominent features is color-coded syntax highlighting. By rendering code components such as variables, keywords, and comments in distinct colors, Google aims to make code samples more legible and intelligible, lessening the cognitive strain on developers.
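Mechanically, syntax highlighting of this kind amounts to tokenizing source code and styling each token class. As a purely illustrative sketch, unrelated to Google’s implementation, Python’s standard tokenize module can drive a toy highlighter that assigns an ANSI terminal color to each token class:

```python
import io
import tokenize

# ANSI escape codes per token class: a toy stand-in for the kind of
# color-coding described above, not how Google's SGE does it.
COLORS = {
    tokenize.NAME: "\033[34m",     # identifiers and keywords: blue
    tokenize.STRING: "\033[32m",   # string literals: green
    tokenize.COMMENT: "\033[90m",  # comments: gray
    tokenize.NUMBER: "\033[36m",   # numeric literals: cyan
}
RESET = "\033[0m"

def highlight(source: str) -> str:
    """Wrap each colorable token of Python source in an ANSI color escape."""
    out, last_end = [], (1, 0)
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.start[0] == last_end[0]:
            # Preserve the original spacing between tokens on the same line.
            out.append(" " * (tok.start[1] - last_end[1]))
        color = COLORS.get(tok.type)
        out.append(f"{color}{tok.string}{RESET}" if color else tok.string)
        last_end = tok.end
    return "".join(out)

print(highlight('total = 1 + 2  # a comment\n'))
```

Here identifiers render blue, numbers cyan, and comments gray; production highlighters in editors and search results follow the same tokenize-then-style approach with far richer grammars.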

But Google’s ambitions with generative AI don’t stop at coding. With the proliferation of information on the internet, navigating through vast amounts of data has become a challenge for many. Addressing this, Google, under its Search Labs initiative, has rolled out an experimental feature named “SGE while browsing.”

Although it’s currently available on the Google app for Android and iOS, plans are underway to introduce this feature to Chrome on desktop platforms. The primary goal is to revolutionize the way users engage with long-form content on the internet. By offering an AI-generated list of key points on selected web pages, users can quickly grasp the essence of articles. The “Explore on page” option is another gem, allowing users to identify and jump to specific sections that answer particular questions, making the process of information retrieval both efficient and user-centric.

Yet, as with most technological advancements, Google’s foray into deeper AI integration has its detractors. Some researchers and tech pundits have expressed reservations, suggesting that an over-dependence on AI-curated search results might inadvertently stifle individual critical thinking and independent thought processes. This debate underscores the broader challenges of balancing AI assistance with human autonomy in the digital age.

In tandem with these developments, Google’s recent update to its privacy policies on July 1 is also noteworthy. The revised policies grant Google the latitude to utilize publicly available data more extensively for AI training, signaling the company’s unwavering focus on refining and expanding its AI capabilities.

In conclusion, Google’s latest updates, while promising a more streamlined and enriched user experience, also open up discussions on the ethical and practical implications of AI’s pervasive role in our digital interactions.




Breaking: Tech Giants Unite – OpenAI, Anthropic, Google DeepMind, and Microsoft Launch Frontier Model Forum for Safer AI Development

OpenAI, in collaboration with Anthropic, Google DeepMind, and Microsoft, announced the formation of the Frontier Model Forum on July 26, 2023. This new industry body aims to ensure the safe and responsible development of future hyperscale AI models.

The Frontier Model Forum is a response to the shared understanding among governments and industry that, while AI holds tremendous promise for global benefit, it also necessitates appropriate safeguards to mitigate potential risks. This initiative builds upon the efforts already made by the US and UK governments, the European Union, the OECD, and the G7 through the Hiroshima AI process, among others.

The Forum will focus on three key areas over the coming year:

  1. Identifying best practices: The Forum aims to promote knowledge sharing and best practices among industry, governments, civil society, and academia, particularly focusing on safety standards and practices to mitigate a wide range of potential AI risks.
  2. Advancing AI safety research: The Forum will support the AI safety ecosystem by identifying crucial open research questions on AI safety. It will coordinate research to advance efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. An initial focus will be on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
  3. Facilitating information sharing among companies and governments: The Forum plans to establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks, following best practices in responsible disclosure from areas such as cybersecurity.

Kent Walker, President of Global Affairs at Google & Alphabet, expressed his excitement about working together with other leading companies, sharing technical expertise to promote responsible AI innovation.

Brad Smith, Vice Chair & President at Microsoft, emphasized that companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control.

Anna Makanju, Vice President of Global Affairs at OpenAI, stressed the importance of oversight and governance in AI development.

Dario Amodei, CEO at Anthropic, highlighted the potential of AI to fundamentally change how the world works and the vital role of the Frontier Model Forum in coordinating best practices and sharing research on frontier AI safety.

The Frontier Model Forum represents a significant step in the tech sector’s collective effort to advance AI responsibly, addressing challenges so that AI benefits all of humanity.




Apple Joins GPT AI Race Following OpenAI and Google

Apple is reportedly developing its own generative pre-trained transformer (GPT) artificial intelligence (AI) model, internally referred to as “Ajax” or “Apple GPT.” There are, however, no clear indications that the company plans to launch it publicly, keeping the tech community in anticipation.

The Ajax system is said to be similar to OpenAI’s ChatGPT and Google’s Bard, two leading AI models in the industry. Interestingly, Ajax was built on top of JAX, Google’s machine learning framework, and currently runs on Google Cloud. This reliance on Google’s technology could limit Apple’s ability to scale Ajax beyond internal testing for cost reasons, posing a challenge for the tech giant.

Apple’s approach to AI has always been privacy-focused, a stance that has earned it a loyal user base. The company’s efforts center on AI technologies that can run on onboard processors rather than cloud-based services, in line with its commitment to user privacy and data security. If Apple were to develop a GPT model capable of running entirely on-device on iPhone hardware, it could offer significant benefits to privacy-conscious users, potentially reshaping the AI space.

Despite Apple’s lack of presence in the chatbot space, the company is a significant player in AI. The AI powering the iPhone’s camera and photography editing suite remains cutting edge, demonstrating Apple’s commitment to integrating AI into its products. Furthermore, Apple Research consistently outputs a steady stream of significant papers in the machine learning space, contributing to the advancement of AI technology.

Multiple teams within Apple are working on the Ajax project. These teams are addressing potential privacy implications, a crucial aspect of AI development. The report also mentions that the company has been using the Ajax-powered chatbot internally, suggesting that Apple is actively testing and refining the technology.

Apple’s most famous AI system is Siri, the voice assistant. To bolster its AI efforts, Apple hired John Giannandrea, who previously headed up AI and search at Google, in 2018 to oversee Siri and its machine learning teams. This strategic hire underscores Apple’s commitment to advancing its AI capabilities.

Apple is planning to make a “significant AI-related announcement” sometime in 2024. However, the company is in no rush to figure out the application of this technology. It is still determining how the technology will integrate into its larger portfolio. This measured approach reflects Apple’s commitment to delivering high-quality, user-friendly products and services.

As the AI race heats up among tech giants, Apple’s entry into the GPT AI space marks a significant step. It will be interesting to see how Apple’s AI developments unfold in the coming years, and how it will shape the future of AI technology. With its focus on privacy and quality, Apple’s advancements in AI could potentially redefine the industry standards.




Google, UK, FTX and Binance in Crypto News

In the latest crypto news, Google has expanded its Web3 program by adding 11 blockchain partners to its Google for Startups Cloud Program. The program will provide expertise, grants, and services to emerging Web3 entrepreneurs. The UK government has also allocated $125 million to establish an AI task force aimed at promoting the country’s sovereign capabilities, such as public services, and fostering the adoption of safe and reliable AI foundation models. On the other hand, FTX has agreed to sell its LedgerX futures and options exchange and clearinghouse to M7 Holdings for $50 million, while Binance.US has backed out of its $1 billion Voyager asset purchase due to the “hostile and uncertain regulatory climate in the United States.”

In more detail, Google has partnered with 11 Web3 blockchain firms, such as Alchemy, Polygon, Celo, and Hedera, to expand its Google for Startups Cloud Program. As part of the program, pre-seed Web3 startups can receive up to $2,000 in Google Cloud credits valid for two years, while seeded startups can access $200,000 over two years for Google Cloud and Firebase usage. Additionally, blockchain partners are offering grants of up to $3 million to seeded companies in the program. Nansen, a blockchain analytics company, has also partnered with Google Cloud to provide real-time blockchain data for startups.

Meanwhile, the UK government has launched an AI task force to accelerate the country’s readiness for AI. The task force will focus on promoting sovereign capabilities, such as public services, and fostering the adoption of safe and reliable AI foundation models. The task force aims to launch its first pilots of AI usage and integration targeting public services in the next six months. The UK is committed to becoming a science and technology superpower by 2030 and is pushing for “safe AI” that regulates technology to “keep people safe” without limiting innovation.

In terms of cryptocurrency exchanges, FTX has agreed to sell its LedgerX futures and options exchange and clearinghouse to M7 Holdings for $50 million. The deal is subject to approval from the US Bankruptcy Court for the District of Delaware, which is scheduled to hear the case on May 4. FTX purchased LedgerX in August 2021 to expand its spot trading services, and the sale is part of FTX’s efforts to monetize assets and deliver recoveries to stakeholders.

On the other hand, Binance.US has backed out of its agreement to purchase bankrupt cryptocurrency brokerage Voyager Digital’s assets for $1 billion, citing the “hostile and uncertain regulatory climate in the United States.” The Voyager Official Committee of Unsecured Creditors expressed its disappointment at the news and said it was investigating potential claims against Binance.US. Voyager and the creditors’ committee will now work on distributing cash and crypto to customers directly via the Voyager platform.

In conclusion, the crypto world has seen significant developments this week, from Google expanding its Web3 program to the UK government allocating funding for an AI task force. FTX is set to sell LedgerX, and Binance.US backs out of the Voyager asset purchase. The industry remains dynamic and unpredictable, with companies and governments adapting to the ever-changing regulatory environment.



Google Merges Teams to Form Google DeepMind for AI Breakthroughs

Google has announced the formation of a new business unit, Google DeepMind, aimed at advancing the development of artificial intelligence (AI) in a safe and responsible manner. The unit is the result of a merger between Google’s Brain team and London-based AI company DeepMind, which Google acquired in 2014.

According to Google and Alphabet CEO Sundar Pichai, the merger is intended to “significantly accelerate our progress in AI.” By combining Google’s AI talent into one focused team, Pichai hopes to create breakthroughs and products that can shape the future of AI.

To achieve this goal, Pichai has appointed Chief Scientist Jeff Dean to lead the development of powerful, multimodal AI models. Dean, who will report directly to Pichai, has been tasked with building a team that can accelerate the company’s progress in AI and create products that are both safe and responsible.

The new business unit, Google DeepMind, will focus on developing AI technologies that can be applied to a wide range of industries, from healthcare and finance to transportation and communication. This includes developing algorithms that can improve the accuracy of medical diagnoses, predicting stock prices with greater accuracy, and enhancing the capabilities of self-driving cars.

One of the key challenges in AI development is ensuring that the technology is used in a safe and responsible manner. This includes ensuring that AI models are not biased or discriminatory and that they are transparent and explainable. Google DeepMind will be dedicated to developing AI technologies that are both safe and transparent, with the goal of promoting the responsible use of AI across industries.

In addition to developing new AI breakthroughs and products, Google DeepMind will also focus on training the next generation of AI experts. This includes partnering with universities and research institutions to provide training and education in AI technologies.

The formation of Google DeepMind is part of Google’s broader strategy to become a leader in AI development. The company has been investing heavily in AI research and development over the past decade, and has already made significant breakthroughs in areas such as natural language processing and computer vision.

By creating a dedicated business unit for AI development, Google hopes to accelerate its progress in this field and create breakthroughs and products that can have a significant impact on society. With Jeff Dean at the helm, Google DeepMind is poised to become a leading force in AI development and shape the future of the industry for years to come.

