The European Commission this week opened its new artificial intelligence (AI) office, which will help set policy for the bloc while also serving as a “global reference point,” according to officials.
“The European AI Office will support the development and use of trustworthy AI, while protecting against AI risks,” the commission wrote in a statement published on its website. “The AI Office was established within the European Commission as the center of AI expertise and forms the foundation for a single European AI governance system.”
“The AI Office also promotes an innovative ecosystem of trustworthy AI, to reap the societal and economic benefits,” the commission said. “It will ensure a strategic, coherent and effective European approach on AI at the international level, becoming a global reference point.”
The Commission presented its AI strategy package in April 2021, aiming to make the European Union (EU) a “world-class hub for AI” while “ensuring that AI is human-centric and trustworthy.”
The new office will work mainly to coordinate AI policy among EU member states and support their national governance bodies – a key point of the Bletchley Park agreement signed last year during the world’s first AI safety summit.
The Bletchley Declaration, signed by 28 countries including the United States, China and the United Kingdom, focuses on two main points: Identifying AI safety risks and “building respective risk-based policies across our countries to ensure safety in light of such risks.”
Safety in the development and use of AI has remained a central issue for debate and policy since the public first latched onto the technology’s potential to transform society and industry.
The push to get a handle on that development led the European Commission to launch an AI innovation package, including the GenAI4EU initiative, which will support startups and small and midsize enterprises to ensure any new AI project “respects EU values and rules.”
In a State of the Union address, European Commission President Ursula von der Leyen announced a new initiative to make Europe’s supercomputers available to innovative European AI startups and launched a competition offering €250,000 (roughly $273,500) in prize money to companies that develop new AI models and either release them under an open-source license for non-commercial use or publish their research findings.
Competing to lead the way in AI does not just mean staying at the cutting edge of tech development. AI safety policy has proven a competitive arena for nations jockeying to position themselves at the forefront of the industry.
The U.S. established the U.S. Artificial Intelligence Safety Institute under the National Institute of Standards and Technology following the safety summit, looking to “facilitate the development of standards for safety, security, and testing of AI models,” among other tasks.
Europe has followed suit and released the EU AI Act, which the commission touts as the world’s first comprehensive law on AI. The European Parliament declared that AI developed within member states should remain “safe, transparent, traceable, non-discriminatory and environmentally friendly.”
“AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes,” the Parliament said.
The AI Office will work with a “range of institutions, experts and stakeholders” to accomplish its tasks, including an independent panel of scientific experts to ensure “strong links to the scientific community.”