The first of its kind in Europe, the international AI standardisation committee’s plenary meeting will be hosted by the Adapt Centre and the NSAI. It is hoped this opportunity can be used to showcase Ireland as an “AI island.”
Ireland is already a tech hub, attracting some of the biggest companies and top talent in the world. This has led to many asking: could Ireland become the Silicon Valley for Artificial Intelligence? In Europe, we are certainly one of the front runners.
To many people, Artificial Intelligence conjures up images of self-driving cars, robots, drones and chatbots. However, it is much more than this: the technology influences our lives in many other ways.
AI wears many hats across many industries. It is the driving force behind ever greater automation and is already heavily used in sectors ranging from finance and medicine to marketing. AI also permeates many of our standard practices: translation memory, concordance searching, terminology management and QA automation, not to mention machine translation.
AI experts from all over the world will be present, including representatives from some of the world’s leading technology companies such as Microsoft, IBM, Google, Huawei and Fujitsu. Europe’s first international plenary meeting will be used to develop the world’s first standards in AI.
Many topics will be discussed, but there are two topics in particular that AI experts will undoubtedly focus on…
‘Trustworthiness of AI’
This meeting will be the third of its kind, following on from big hitters Silicon Valley and China. The working group on Trustworthy AI is addressing the development of standards that help adopters, users and other stakeholders establish trust in AI systems.
“As the technologies contributing to AI become more widely accessible, more applications will integrate one or more AI components. The developers of those applications will benefit from open standards for interfacing to, training, monitoring and controlling those components in a way that avoids vendor lock-in,” says Prof Dave Lewis, Associate Director of the Adapt Centre, Trinity College.
The working group will do this by making AI systems transparent, verifiable and easier to explain, while also developing approaches that mitigate pitfalls, threats and risks. These efforts are crucial to the future of AI, as governments and international bodies around the world highlight them as critical factors in building trustworthy AI.
Developing AI standards that help protect jobs
In years past, the most prominent debate among AI experts was the effect AI could have on employment. Will it create more jobs than it destroys?
The answer is simple: it will do both. Current roles will be repurposed and staff will be retrained. Some more traditional job titles will cease to exist, but many more new ones will be created.
So how can the AI community ensure that the productivity and economic growth of AI adoption are spread equally across society?
“Developing AI standards will help assess more accurately the full social cost of adoption, in terms of reliability, bias, risk and accountability. The resulting clarity about the distribution of costs and benefits between stakeholders will help society decide on appropriate and equitable redress as inequalities emerge,” says Prof Lewis.
The project is expected to take at least three years. It is hoped that the development of international standards will provide a framework and a shared vocabulary so that all stakeholders can talk to each other.