Spotify removes AI-generated music

The ongoing battle between the music industry and artificial intelligence (AI) continues as Spotify removes AI-generated music. According to a report by the Financial Times (FT), the music streaming platform has removed 7% of songs created by the AI music startup Boomy, amounting to “tens of thousands” of songs. Spotify is also said to be increasing its policing of the platform in response to the situation.

Spotify’s action comes after it and other streaming services received complaints of fraud and clutter. The music industry giant Universal Music Group (UMG) alerted streaming providers to “suspicious streaming activity” on Boomy tracks, according to FT sources. Spotify said UMG’s alert led to the removal of the Boomy songs over suspected “artificial streaming,” in which bots pose as listeners. The company added, “Artificial streaming is a longstanding, industry-wide issue that Spotify is working to stamp out across our service.”

Representatives from Boomy said the platform is “categorically against” all manipulation or artificial streaming of any kind. However, Lucian Grainge, CEO at Universal Music Group, commented to investors that “The recent explosive development in generative AI will, if left unchecked, both increase the flood of unwanted content on platforms and create rights issues with respect to existing copyright law.”

Last month, UMG emailed streaming services, including Spotify, to block AI services from accessing music catalogs for training purposes. UMG has also sent requests “left and right” to remove AI-generated songs from platforms.

While music industry giants are fighting to control AI-generated content, some artists like Grimes are championing its use. The musician permitted creators to use her voice, offering to be a “guinea pig” for AI music creation, provided a small set of rules is followed and royalties are split.

In conclusion, AI-generated music and fraudulent streaming are long-standing problems for the music industry. As the technology develops, music industry giants and streaming services are likely to step up their policing of platforms. At the same time, some artists are embracing AI as a new way to create music, and the industry will have to adapt to the technology in the coming years.

Source


Temasek invests in algorithmic currency system

Temasek, Singapore’s state-owned investment firm, has invested $10 million in Array, the developer of an algorithmic currency system based on smart contracts and artificial intelligence. Array says this is its second funding round, lifting its valuation to over $100 million.

The new Temasek-backed algorithmic currency system is aimed at providing a more stable, efficient, and scalable asset than traditional cryptocurrencies like Bitcoin. The system is expected to have a variety of use cases, including payments, remittances, and investments.

Array’s smart contract platform, ArrayFi, is designed to support decentralized applications built on its network and driven by its proprietary AI algorithm, ArrayGo. ArrayGo operates autonomously, without human or institutional control, and is triggered solely by market actions. According to a Medium blog post by the Array team, a bonding curve is built into the smart contract that governs the issuance and trading of the native token Ara (ARA), keeping the token’s value stable and predictable for investors and traders. The platform also aims to protect Array users against “pump and dump” schemes.
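To illustrate the general idea: a bonding curve ties a token’s price deterministically to its circulating supply, so the cost of any purchase can be computed in advance from the contract itself. The sketch below shows a generic linear bonding curve in Python for illustration only; the `base` and `slope` parameters and the linear price function are assumptions for demonstration, not details of ArrayFi’s actual contract, which has not been published here.

```python
def price(supply: float, base: float = 1.0, slope: float = 0.001) -> float:
    """Spot price of the next token at a given circulating supply.

    A linear curve: price grows steadily as more tokens are minted.
    """
    return base + slope * supply


def buy_cost(supply: float, amount: float,
             base: float = 1.0, slope: float = 0.001) -> float:
    """Total cost to mint `amount` tokens starting from `supply`.

    This is the integral of the linear price function over
    [supply, supply + amount], so larger buys pay a predictably
    higher average price.
    """
    return base * amount + slope * (amount * supply + amount * amount / 2)


# Example: minting 100 tokens when 10,000 are already in circulation.
cost = buy_cost(10_000, 100)
```

Because the buy cost is the integral of the price function, a large purchase pays a progressively higher average price, which is one reason bonding curves are often described as dampening “pump and dump” behavior: sudden large buys are expensive, and their price impact is fully predictable.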

Temasek’s investment in Array comes several months after the Singapore government admitted that the company suffered reputational damage due to investing in the collapsed crypto exchange FTX. In November 2022, Singapore’s Deputy Prime Minister Lawrence Wong argued that Temasek suffered more than just financial losses due to investing in FTX.

Despite suffering significant losses, Temasek continues to invest in cryptocurrency projects. In April, Temasek also participated in a $10 million series A round for the United States-based impact-verification and intelligence firm BlueMark.

Temasek is fully owned by the Ministry of Finance but operates independently. The company was forced to write down its entire $275 million FTX investment, which accounted for just 0.09% of Temasek’s S$403 billion portfolio as of March 2022. Despite this loss, Temasek continues to be a major investor in various industries, including technology, healthcare, and financial services.

In conclusion, Temasek’s investment in Array highlights the company’s continued interest in cryptocurrency projects despite significant losses in the past. The new algorithmic currency system developed by Array aims to provide a more stable and efficient asset than traditional cryptocurrencies, with a variety of use cases such as payments, remittances, and investments. With the backing of Temasek, Array’s valuation is set to exceed $100 million, indicating significant potential for growth and development in the future.


EU Drafts AI Bill to Address Copyright Concerns

Concerns over the usage of copyrighted material have risen to the forefront as the use of artificial intelligence (AI) in the production of content becomes more commonplace. In response to these concerns, legislators in the European Union have approved a draft law with the intention of regulating both the firms that produce the technology and the technology itself.

The law, a component of the EU’s Artificial Intelligence Act, intends to classify AI tools according to the level of risk they pose, ranging from minimal to unacceptable. High-risk tools won’t be banned outright but will instead be subject to stricter disclosure requirements. Generative AI tools such as ChatGPT and Midjourney will soon be required to disclose any copyrighted material used to train their models.

The particulars of the law will be finalized during subsequent negotiations between lawmakers and member states. According to Svenja Hahn, a member of the European Parliament, the current draft strikes a balance between excessive surveillance and over-regulation, protecting people while encouraging innovation and contributing to economic growth.

The European Union’s data watchdog has separately voiced concern about the difficulties that artificial intelligence (AI) businesses in the United States may face if they fail to comply with the General Data Protection Regulation.

Additionally, Eurofi, a European think tank comprising organizations from both the public and private sectors, has published a magazine featuring a section devoted to applications of AI and machine learning in the EU financial sector. The mini-essays in that section, covering AI innovation and regulation in the EU with a focus on finance, all touched on the forthcoming Artificial Intelligence Act in some way.

One of the authors, Georgina Bulkeley, director for EMEA financial services solutions at Google Cloud, stressed the significance of AI regulation, stating that the technology is “too important not to regulate, and too important not to regulate well.”

Overall, the proposed legislation is a substantial step toward regulating the use of AI and copyrighted works in the EU. As the technology improves and spreads across sectors, it is essential that it be used transparently and ethically in order to safeguard both consumers and companies.


Bitget Pledges $10 Million for Fetch.ai Ecosystem

Bitget, a major cryptocurrency derivatives exchange, has announced that it will spend $10 million on the development of the Fetch.ai ecosystem. Fetch.ai, a startup that provides infrastructure for autonomous services, offers an artificial intelligence agent network that enables decentralized, autonomous agents to carry out tasks ranging from simple data processing to complex financial modeling. Fetch.ai’s smart wallet also includes automation and integration with the API for OpenAI’s ChatGPT, the chatbot that amassed an estimated one hundred million users within months of its late-2022 debut.

Bitget’s investment in the AI infrastructure provider Fetch.ai has two main goals: to contribute to the firm’s continued growth and to extend the companies’ existing commercial partnership. Bitget cited the recent AI buzz generated by OpenAI’s ChatGPT as evidence that the technology can increase human productivity and creativity. As part of the partnership, Bitget will provide marketing consultancy and other services to help Fetch.ai grow its client base.

According to CoinGecko, Bitget is now the ninth-largest cryptocurrency spot exchange in the world, with a daily trading volume of roughly $990 million. Headquartered in the Seychelles, Bitget serves over 8 million customers spread across more than one hundred countries and territories. In April 2023, the exchange was granted a regulatory license to provide its services to customers in Lithuania. The previous month, it invested $30 million in the multichain wallet provider BitKeep, becoming the company’s controlling investor.

Fetch.ai is expected to see accelerated growth as a direct consequence of Bitget’s investment. The company is scaling up its infrastructure for autonomous services, and with Bitget’s financial backing it can both expand its commercial relationships and continue developing AI-driven products.


UK to Invest in AI Task Force

The United Kingdom is making significant investments in the development of its technology sector. UK officials recently announced the formation of an AI task force that will receive an initial funding of £100 million to accelerate the country’s readiness for the adoption of artificial intelligence. This task force will prioritize public services and aim to ensure that safe and reliable foundation models for AI use are established.

UK Prime Minister Rishi Sunak expressed his belief in the opportunities that AI presents for economic growth, advancements in healthcare, and security. He stated that by investing in emerging technologies through the task force, the UK can continue to lead the way in developing safe and trustworthy AI and shaping a more innovative economy.

The task force is set to focus on ensuring sovereign capabilities, including public services, and fostering broad adoption of safe and reliable foundation models. The UK has committed to becoming a science and technology superpower by 2030, and this task force’s work will contribute to achieving this goal. The first pilots of AI usage and integration will target public services and are expected to launch in the next six months.

The UK government has already invested £900 million into computing technology, highlighting its commitment to developing its technology sector. Officials in the UK are simultaneously pushing for “safe AI,” which means regulating the technology to keep people safe while promoting innovation.

The country’s science, innovation, and technology secretary, Michelle Donelan, expressed her belief that AI can transform every industry if developed responsibly. She said that this development would ensure that the public and businesses have the trust they need to confidently adopt this technology and realize its benefits fully.

This announcement comes shortly after the UK Treasury announced the revival of its Asset Management Taskforce, which will focus on developing crypto regulation. Coinbase CEO Brian Armstrong will help advise regulators on law and taxation for banks and the fintech industry.

In conclusion, the UK’s investment in the AI task force represents a significant step towards establishing the country as a leader in AI technology. By prioritizing public services and establishing safe and reliable foundation models for AI use, the UK is paving the way for innovation while keeping people safe.


Google Merges Teams to Form Google Deepmind for AI Breakthroughs

Google has announced the formation of a new business unit, Google DeepMind, aimed at advancing the development of artificial intelligence (AI) in a safe and responsible manner. The unit is the result of a merger between Google’s Brain team and London-based AI company DeepMind, which Google acquired in 2014.

According to Google and Alphabet CEO Sundar Pichai, the merger is intended to “significantly accelerate our progress in AI.” By combining Google’s AI talent into one focused team, Pichai hopes to create breakthroughs and products that can shape the future of AI.

To achieve this goal, Pichai has appointed Chief Scientist Jeff Dean to lead the development of powerful, multimodal AI models. Dean, who will report directly to Pichai, has been tasked with building a team that can accelerate the company’s progress in AI and create products that are both safe and responsible.

The new business unit, Google DeepMind, will focus on developing AI technologies that can be applied to a wide range of industries, from healthcare and finance to transportation and communication. This includes developing algorithms that can improve the accuracy of medical diagnoses, predict stock prices more accurately, and enhance the capabilities of self-driving cars.

One of the key challenges in AI development is ensuring that the technology is used in a safe and responsible manner. This includes ensuring that AI models are not biased or discriminatory and that they are transparent and explainable. Google DeepMind will be dedicated to developing AI technologies that are both safe and transparent, with the goal of promoting the responsible use of AI across industries.

In addition to developing new AI breakthroughs and products, Google DeepMind will also focus on training the next generation of AI experts. This includes partnering with universities and research institutions to provide training and education in AI technologies.

The formation of Google DeepMind is part of Google’s broader strategy to become a leader in AI development. The company has been investing heavily in AI research and development over the past decade, and has already made significant breakthroughs in areas such as natural language processing and computer vision.

By creating a dedicated business unit for AI development, Google hopes to accelerate its progress in this field and create breakthroughs and products that can have a significant impact on society. With Jeff Dean serving as chief scientist, Google DeepMind is poised to become a leading force in AI development and shape the future of the industry for years to come.


European Commission Launches Research Unit to Investigate Algorithms Used by Big Tech

The European Commission has taken a significant step towards regulating Big Tech by launching a new research unit called the European Centre for Algorithmic Transparency (ECAT). The primary focus of ECAT is to investigate the impact of algorithms made and used by prominent online platforms and search engines such as Facebook and Google. The team will analyze and evaluate the AI-backed algorithms used by Big Tech firms to identify and address any potential risks posed by these platforms.

ECAT will be embedded within the European Union’s existing Joint Research Centre, which conducts research on a broad range of subjects including artificial intelligence. The team will consist of data scientists, AI experts, social scientists, and legal experts, and will focus on conducting the algorithmic accountability and transparency audits required by the Digital Services Act, a set of European Union rules enforceable as of Nov. 16, 2022.

AI-based programs are built using a series of complex algorithms, meaning ECAT will also be looking at algorithms that underpin AI chatbots such as OpenAI’s ChatGPT, which some believe could eventually replace search engines. The team will examine the algorithms used by Big Tech firms to ensure that they are transparent and that their operations do not harm users.

According to Thierry Breton, the EU’s internal market commissioner, ECAT will “look under the hood” of large search engines and online platforms to “see how their algorithms function and contribute to the spread of illegal and harmful content.” This move by the European Commission is a significant development in regulating Big Tech firms, and it will ensure that these companies are held accountable for the impact of their algorithms on society.

The development of AI has been a contentious issue, with nearly a dozen EU politicians calling for the “safe” development of AI in a signed open letter on April 16. The lawmakers asked United States President Joe Biden and European Commission President Ursula von der Leyen to convene a summit on AI and agree on a set of governing principles for the development, control, and deployment of the tech.

Tech entrepreneur Elon Musk has also expressed concerns about the development of AI. In an April 17 Fox News interview, he argued that AI chatbots like ChatGPT have a left-wing bias and said he was developing an alternative called “TruthGPT.” Musk’s move highlights growing concerns about the ethical implications of AI and its impact on society.

In conclusion, the launch of ECAT is a significant development in the regulation of Big Tech. Its team of experts will play a vital role in conducting algorithmic accountability and transparency audits, helping to ensure that the algorithms used by major platforms are transparent, that they do not harm users, and that any risks they pose are identified and addressed.


Elon Musk Warns of AI Destructive Potential

Artificial intelligence (AI) has been a hot topic in the tech industry for years, with many researchers and engineers working tirelessly to bring the concept of a generative AI to life. However, some experts have raised concerns over the potential risks associated with AI, including its potential to destroy civilization. One such expert is Tesla and Twitter CEO Elon Musk, who has been vocal about the dangers of AI falling into the wrong hands or being developed with ill intentions.

On March 15, news surfaced that Musk had plans to create a new AI startup, which would undoubtedly stir up even more debate around the topic. Despite his involvement in AI development, Musk has not shied away from acknowledging the potential risks associated with the technology. In fact, he has been one of the most prominent voices warning of its destructive potential.

During an interview with FOX, Musk stated that AI could be more dangerous than mismanaged aircraft design or production maintenance. He stressed that the probability of such an event occurring may be low, but it is non-trivial and has the potential for civilizational destruction. Musk believes that it is critical to have a proactive approach in managing the development of AI technology, to ensure it is used ethically and safely.

Musk’s warnings are not without merit, as there have been instances where AI has been used for malicious purposes. For example, AI-generated deepfakes have been used to spread disinformation and deceive the public. Additionally, the development of autonomous weapons powered by AI has raised concerns about the potential for the technology to be used in warfare and conflict.

To mitigate the potential risks associated with AI, Musk has called for regulation and oversight in its development. He has also advocated for the establishment of ethical guidelines and standards that ensure the technology is developed and used safely and ethically. Furthermore, he has encouraged researchers and engineers to focus on developing AI systems that align with human values, rather than those that prioritize efficiency and productivity over human wellbeing.

In conclusion, the potential risks associated with AI cannot be ignored, and Musk’s warnings should be taken seriously. While the development of AI has the potential to transform industries and improve our daily lives, it is crucial that we approach its development with caution and prioritize safety and ethics above all else.
