

29 Nov 18 18:07

On 18-19 November, I had the privilege of joining researchers from around the world at the 4th NYUAD Transport Symposium in Abu Dhabi to discuss new technologies and approaches in mobility and logistics that could pave the way towards intelligent cities able to adapt to rapid urbanization in a digitizing society.

The consequences of this urbanization are immediate and pressing. The number of people living in cities around the world has risen from 751 million in 1950 to 4.2 billion today, and the United Nations estimates that this will increase to 6.7 billion by 2050. This massive increase in urban population, coupled with the continuing globalization of supply chains and growing interconnectivity of economies, is presenting challenges and opportunities we have never witnessed before, from serving the world's megacities to fighting cyber-crime.

Technology is essential to unravelling the growing complexity of this new urban future and to providing safe and sustainable transport, supply chains, and services to millions of people sharing limited amounts of space.

For example, the vast amounts of data generated by what is commonly referred to as the Internet of Things, or IoT, coupled with machine intelligence and advances in computing power, allow us to gather and process information in real time. This is a boon for re/insurers like Swiss Re, as it allows us to develop better models for predicting the frequency and severity of extreme events, from natural catastrophes and financial risks to health risks and the risks arising from the digitization of our societies.
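The frequency/severity framing above can be made concrete with a minimal Monte Carlo sketch. This is an illustration of the general actuarial approach, not Swiss Re's actual models, and all parameters are hypothetical: event counts follow a Poisson distribution, individual event severities a lognormal one.

```python
import numpy as np

def simulate_annual_losses(freq_lambda, sev_mu, sev_sigma, n_years, seed=0):
    """Simulate annual aggregate losses: event counts are Poisson-distributed,
    individual event severities are lognormal (a classic actuarial model)."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(freq_lambda, size=n_years)   # events per simulated year
    return np.array([rng.lognormal(sev_mu, sev_sigma, size=n).sum()
                     for n in counts])

# Hypothetical parameters: ~2 events/year, median severity 1.0 (arbitrary units)
losses = simulate_annual_losses(freq_lambda=2.0, sev_mu=0.0,
                                sev_sigma=1.0, n_years=10_000)
var_99 = np.quantile(losses, 0.99)  # the tail metric insurers price against
```

Richer real-time data sharpens the estimates of `freq_lambda` and the severity parameters, which is exactly where IoT feeds add value.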

However, these technologies also raise ethical and social concerns. Regulators, executives, and consumers should be wrestling with these material risks as they enjoy the benefits.

Threaded through all these digital conversations is data. I find it useful to think of data as the new oil. And while many are focused on data extraction and distribution, most are under-investing in data refinement: organizations gather loads of data and hire data scientists to build algorithms, but few focus on curating those data. This is like trying to run a modern car engine, with all its refinement and precision, on dirty, unrefined oil; the result is engine failure. Similarly, dirty data creates problems for machine-intelligence-enabled algorithms.

Noisy data degrades model performance, and this will become more apparent to everyone as societies grow more dependent on automation. Noisy data, poor algorithms, and algorithmic malpractice will continue to create system vulnerabilities as society becomes more digitized, unless researchers, regulators, and companies devote more resources to systematically remedying these inadequacies. We need standards developed and enforced for implementing automation. That is, we need more than just automation; we need intelligent automation.

Another overarching concern relates to data governance. If we don't decide which data we need, who owns the data, and how we share the value derived from it, we will simply trade an oil-based economy, with all its dysfunctionalities, for a data-based economy with a whole new set of dysfunctionalities. Building a robust and sustainable digital society requires more than just technology; it needs clear thinking on how institutions must transform to avoid the pitfalls experienced in prior societal transformations.

As we automate data processing, a critical question becomes how to best use humans. There are tasks a machine can do much better than us, like crunching vast amounts of information. But there are others where we perform much better than machines, such as critical decision-making when faced with a new problem. How to best augment humans is a core question of our times.

The ethics of algorithmic decision-making is also critical to building fair, resilient cities. As re/insurers, we are facing questions like: who is liable when a system fails, for example when an autonomously driven car crashes due to an algorithmic error?

Emerging technologies are beginning to converge into a digital ecosystem that provides tools to resolve many of these conflicts and concerns. For example, Distributed Ledger Technologies (DLT) combined with IoT can help us better manage processes, e.g., tracking supply chain disruption following a natural catastrophe such as the 2011 Tohoku tsunami in Japan, or tracking vehicle carbon emissions to produce real-time, auditable records of environmental impact.
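The property DLT contributes to such supply-chain records is tamper evidence. A toy hash-linked ledger illustrates the idea (container names and events below are invented; real DLT additionally replicates the ledger across parties and adds consensus):

```python
import hashlib
import json

def add_event(chain, event):
    """Append an event record cryptographically linked to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({**payload, "hash": digest})

def verify(chain):
    """Recompute every link; any tampered record breaks verification."""
    prev = "0" * 64
    for rec in chain:
        payload = {"event": rec["event"], "prev_hash": rec["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

ledger = []
add_event(ledger, {"container": "XYZ-001", "status": "departed", "port": "Yokohama"})
add_event(ledger, {"container": "XYZ-001", "status": "delayed", "cause": "tsunami warning"})
ok_before = verify(ledger)            # chain is consistent
ledger[0]["event"]["port"] = "Kobe"   # tamper with history
ok_after = verify(ledger)             # verification now fails
```

Because every record commits to the hash of its predecessor, rewriting any past event silently invalidates everything after it, giving all parties an auditable shared history.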

In the insurance space, these technologies are transforming the full insurance value chain while providing tools to reduce fraud and speed up insurance payouts based on pre-established triggers. This can be critical in the case of events like pandemics. Read more about how Swiss Re is working with the World Bank to support communities in the event of an Ebola outbreak.
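A pre-established trigger can be sketched as a simple parametric payout function: the payout depends only on an objective, observable index crossing agreed thresholds, so no lengthy loss adjustment is needed. The thresholds and amounts below are invented for illustration; real contracts key off audited indices.

```python
def parametric_payout(index_value, tiers):
    """Return the payout for the highest trigger the observed index reaches.

    tiers: (threshold, payout) pairs sorted by ascending threshold.
    Returns 0.0 if no threshold is reached.
    """
    payout = 0.0
    for threshold, amount in tiers:
        if index_value >= threshold:
            payout = amount
    return payout

# Hypothetical outbreak-triggered cover (case counts and amounts invented)
tiers = [(50, 1_000_000.0), (250, 5_000_000.0)]
parametric_payout(30, tiers)    # below first trigger -> 0.0
parametric_payout(300, tiers)   # second trigger reached -> 5_000_000.0
```

Encoding the trigger this explicitly is what lets a DLT-based contract pay out automatically, within days rather than months.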

What can be done today? What are the right regulations given the risks that we face? What are the ethical implications? How can we use current data and models to drive better behaviours and sustainable systems? How do we define ethics in a machine-intelligence world? How much data should companies have access to? Most importantly, how do we use these new technologies and the digital society they produce to improve our lives in a sustainable, responsible way?

We need to work together in public-private partnerships, alongside research institutions, to figure out the best models and solutions and to ensure we incentivize the right development. We want to create large, open-access data repositories and work with the scientific community to co-create a better future.


Category: Other

Location: Abu Dhabi - United Arab Emirates


3 Comments

Luz Arnav - 7 Dec 2018, 1:39 p.m.

A human will often refuse to pull a trigger, knowing the consequences of that action, without even a moment of reflection. A machine will pull a trigger if it is programmed to pull triggers. A machine does not consider its actions unless you program it to do so, and even then its field of thought is extremely constrained.

Gianluca - 8 Dec 2018, 8:55 a.m.

Data first! I believe we should be constantly scanning for valuable new data sources, developing a methodology to identify those sources valuable for problem solving, thinking how to integrate them into existing data lakes and devising new ways to extract insights efficiently.

The new Google Dataset Search (https://toolbox.google.com/datasetsearch) gives a picture of what the global data landscape could look like in the near future. Machine intelligence and econometric methods could be used to rank different data sources by their predictive value.

The monitoring of mega-cities, global supply chains, and other large-scale ecosystems is a challenging problem. Data from sensory inputs (i.e., the Internet of Things) help bottom-up, while data from remote sensing technologies (satellites and, more recently, drones) contribute top-down. Recent studies in economics show how Big Indices - proxies of human activity at city, country, or even larger scales - can be built from remote sensing Big Data.

These examples provide insights on how new data sources, analysed with new techniques, can help measure relevant factors at scale.

Mega-cities, global supply chains and cyber are shifting the weight towards systemic risks, characterised by low-frequency/high-severity scenarios, where classical statistical approaches reach their limits.

Machine Intelligence technologies and large-scale optimization methods have made a breakthrough in analysing large amounts of data and are a powerful toolbox for quantifying risks in large ecosystems.

The current debate on so-called 'black box' approaches is 20 years old and very loud today. Nonetheless, discovering hidden significant patterns in large amounts of real-world, unstructured data is a complex problem. Forcing interpretability 'as a must' is not necessarily a good idea, as it could lead to increased bias and terribly over-simplified models.

This being said, human decision making must deal with uncertainty. In these settings, decisions are taken by humans combining both a predictive component and a judgmental one. The future will likely see predictions made by AI (cheap, accurate, and fast). But the judgmental component... that's for us humans!

Let's stay with mega-cities for a moment. These large scale urban environments are the celebration of networks! They are the underlying infrastructure layer for most of the key services in our societies.
People, goods and information all flow through them. Mobility? It is a flow of people on multi-mode transport networks. Communication? It is the flow of information and insights on computer networks. Logistics & supply chain? The flow of goods over networks. Online communities? The flow of interests over social networks. Organizations' behavior? The flow of human decisions on the set of all networks.

Understanding the dynamics of these flows means understanding how society wants to be served today.

Daniel Martin Eckhart - 8 Dec 2018, 9:12 a.m.

Thanks for your post, Jeff - lots to unpack, lots and lots of food for thought. I like your oil/data comparison - yes, without a doubt, as we now can handle just about every real-time data mountain, we need to get far better at refining/curating. I’m convinced that we will.

The big questions for me will be questions of ethics and vision - in a nutshell, creating the purposeful future of humanity in a tech-driven world. We have shown that we have what it takes to create everything from vaccines to the Empire State Building ... and we have also shown our darkest sides time and time again. Momentous times ... I think that we can be at the forefront of creating that purposeful future.

