The Impact of Europe's AI Act on Marketers

CIO Review Europe | Tuesday, June 14, 2022

The EU has officially proposed the Artificial Intelligence Act, outlining its ability to monitor, regulate and ban uses of machine learning technology.

FREMONT, CA: The EU introduced the Artificial Intelligence Act on April 21, defining its capacity to monitor, regulate, and prohibit uses of machine learning technology. Officials say they aim to invest in and accelerate AI adoption across the EU, boosting the economy while ensuring consistency, tackling global concerns, and building trust with human users. The Act divides AI use-cases into three categories: unacceptable risk, high risk, and low/minimal risk. AI applications that pose an unacceptable risk will be prohibited outright. These are programmes that infringe on basic human rights, such as:

• Subliminal techniques used to manipulate people.

• Exploitation of especially vulnerable groups, such as children.

• Governmental social scoring.

• Real-time remote biometric identification in public spaces by law enforcement (though exemptions exist).

High-risk applications, meanwhile, pose a significant threat to health, safety, and fundamental rights, although the debate over what constitutes high risk has raged since last year, with over 300 organisations weighing in. These AI applications are authorised on the market only if certain safeguards are in place, such as human oversight, transparency, and traceability. Like GDPR, these rules will affect every company that does business in the EU, not just those based there. And they are not only for companies deploying high-risk AI in areas such as infrastructure or law enforcement: they apply to you if you use chatbots to answer questions, machine learning to determine customer attitudes or insights, or any form of content-altering bots. The EU's AI Act has not yet been passed into law, and no implementation date has been set. Once it is passed, failure to comply could result in monetary penalties or, in the case of high-risk applications, monitoring authorities having the power to order AI systems to be retrained or destroyed.

Most marketers need not worry about developing high-risk AI systems subject to strict government regulation. However, some non-high-risk AI systems will still face disclosure requirements, such as those that:

• Interact with people (such as chatbots).

• Recognise emotions (including sentiment).

• Use biometric data to categorise people.

• Create or manipulate content (such as deepfakes).

For marketers, the most important takeaway is data privacy. Brands will face even stricter rules when handling consumer data; many already follow GDPR requirements for collecting, storing, exchanging, and using it. Since the EU announced its proposed AI regulations, there has been a rush of both positive and negative responses online. Some critics argue that the Act's oversight requirements are too broad to cover all AI applications uniformly, since regulating market-ready products, internet platforms, and vital city infrastructure are very different undertakings.
