What does the recent boom in AI & ML technologies amidst a global pandemic mean for the future?

10 September 2020 13:44

AI Use, Data Privacy and COVID-19


More than six months after the World Health Organization (WHO) officially declared a worldwide pandemic, COVID-19 continues to dramatically affect our society. The latest economic projection published by the International Monetary Fund (IMF) estimates that global GDP will decline by as much as 4.9 percent in 2020. At the same time, ongoing evaluations have shown that the pandemic has had an immense impact on the use of Artificial Intelligence (AI) and Machine Learning (ML). Around the world, universities, government institutions, companies and individuals have opted for an increased use of these technologies - and the data to fuel them - to support our society in these challenging times.

This surge of AI-fueled initiatives is helping scientists, government officials and businesses minimize pandemic-related risks while enhancing economic and social outcomes for society. Among these initiatives are increased use of smart technology in the healthcare industry, such as contact tracing programmes that reduce the risk of contagion, and AI-enhanced natural language processing used to detect and curb the spread of false COVID-19 information on social networks.

This influx of innovation is helping to save both livelihoods and lives around the world. However, the use of AI and ML technologies also raises growing ethical concerns.

How can we build trust in a machine?


Prior to the outbreak of COVID-19, widespread fears over the impact of increased AI- and ML-driven automation on the job market had already led to some “tech-lash,” along with concerns over the balance between public health and data protection. While people appear willing to provide some information to government and non-government entities to help fight the spread of the virus, many are wondering if and how they can trust a machine. Overcoming these concerns will require government agencies and companies alike to establish a transparent and inclusive approach to data protection. According to a recent Accenture report, this will help emphasize the positive impact AI and ML technologies can have on our society while building trust in these initiatives and overcoming the “tech-lash”.

Before the coronavirus pandemic hit, AI- and ML-enhanced technologies were already receiving substantial attention, both optimistic and sceptical. Recent reports have highlighted public unease in numerous countries about the use of AI-enhanced contact tracing apps. Various civil rights organisations and data watchdogs point to cases of illicit use of the stored data, particularly in regions where the rule of law is under strain. Even before the pandemic, governments were adding or strengthening data privacy regulations such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act. With consumer trust and compliance on the line, organisations must work to create a culture for AI success that addresses privacy concerns and identifies reliable sources of data for AI applications, building confidence in the use of advanced technologies.
