Post by jameshoff on Mar 14, 2024 18:26:14 GMT 10
Politics versus Technology: in recent decades, the latter has won. The latest battleground is Artificial Intelligence. On June 14 the European Parliament approved a proposal to regulate it, the AI Act. Among other things, it requires that content generated by artificial intelligence always be declared as such, and it protects copyright on the data used to train AI. The problem is that Technology, accustomed to overwhelming every obstacle in its path, moves at an enormously different speed from Politics. The EU regulation now proceeds through negotiations with the European Commission and the Council, and the expectation is that it will have a significant impact on the development and use of artificial intelligence only within 3-4 years. In the meantime, the technology is already in use, with possible harm to citizen-consumers.
For example, Adiconsum, together with 15 other consumer organizations in as many European countries, asks that: 1) until the EU AI law becomes applicable, the authorities investigate to uncover any harm caused and enforce existing data protection, safety and consumer protection legislation; 2) high-tech companies comply with existing EU regulations; 3) the relevant agencies monitor their compliance and impose stringent sanctions in case of non-compliance.

RECOGNITION SYSTEMS

What does the European Parliament's proposal say? First of all, it intends to ban: the use of biometric identification systems in the EU, both in "real time" and "ex post" (except in cases of serious crimes and with prior judicial authorization); all biometric categorization systems that use sensitive characteristics; predictive policing systems (based on profiling, location or past criminal behaviour); emotion recognition systems used in law enforcement, border management, workplaces and educational institutions; and, finally, systems that indiscriminately scrape biometric data from social media or CCTV footage to create facial recognition databases.
High-risk activities, according to the proposal, are those that can harm people's health, safety or fundamental rights, or the environment. AI systems used to influence voters in political campaigns, and those used in the recommendation systems of the major social media platforms, have also been added to this category.

THE PHENOMENON OF DEEPFAKES

Parliament wants to oblige providers of foundation AI models to guarantee robust protection of fundamental rights. Providers of generative AI models would be subject to strict transparency obligations, including: disclosing that content was generated by AI rather than by humans, so as to mitigate the phenomenon of deepfakes; designing their models to prevent the generation of illegal content; and publishing summaries of the copyrighted data used for training.