No one gets angry at a mathematician or a physicist whom he or she doesn’t understand, or at someone who speaks a foreign language, but rather at someone who tampers with your own language — Jacques Derrida
Almost every Artificial Intelligence (AI) expert agrees on three things: an AI algorithm is only as good as the dataset it works on; the larger, more diverse and more global a dataset is, the better the algorithm performs; and skewed datasets can introduce bias into algorithms, bias that can wreak havoc in some cases. However, there is far less agreement on what the definition of privacy should be, what level of access should be granted to whom, and what constitutes good data. Normally, centralization is a sign of crisis: in the past, resources have been pooled in response to industry regulation. The Basel III accord and frameworks such as COSO and COBIT are good examples of standardization and the pooling of resources to deal with regulatory demands. AI is not yet subject to stringent regulation, and rightfully so, since strict laws can stifle innovation. Even so, this article makes a case for pooling data resources and harmonizing approaches so that data becomes less private, more accessible, and governed by built-in rules. The idea is to create a global, unbiased dataset that can fuel the development of AI.
The World Economic Forum meets annually to discuss challenges facing the world. This year, a broad swathe of people met to discuss everything from climate leadership to inclusive finance, and from the future of the world economy to the future of exponential technologies. I have always been intrigued by what the future state of AI might look like.
How Do We Deal With Disparate and Global Efforts At Developing AI Systems?
As I watched the panel below, I could not help but notice a few common themes the panelists agreed on, and many issues on which they did not. While AI as a concept has old origins, most people understand that its development has happened in waves…