Tuesday, April 10, 2018

Artificial Intelligence - The Sentient Machine by Amir Husain

Excerpt from The Sentient Machine by Amir Husain

It will be some time before augmented reality (AR) of sufficient quality and fidelity hits the mainstream. But when it does (and it will), its effect will be profound. We already struggle to determine what objective truth is, and whether there even is such a thing if all experiences are a function of how they are perceived. Augmented reality will bring this dichotomy between objective and perceived reality into even sharper focus, because the technology will create a customized view of the world for each of us.

What does all of this have to do with artificial intelligence? A lot. Augmented reality will become a way to bring artificially intelligent constructs to life and blend them in with our view of "real" reality. As AR breaks through the barriers of perception and causes our mental compartmentalization of the digital and the physical to crumble, improvements in AI will ride atop these indescribably fluid experiences and infuse the resulting mixed reality with people, creatures, objects, places, and experiences that are artificially intelligent, or created by AI.

It is important to understand that the idea of augmented reality doesn't require a particular device. The concept applies, at different levels of fidelity, to cell phone cameras and screens, wearable headsets, glasses, projected surfaces, and yes, brain implants. We will likely experience AR in a variety of ways in the future. Surfaces within smart buildings are one such potential point of experience. The possibilities are immense: everything from AI systems that learn based on our preferences and project our favorite art onto walls to the use of light to enhance and even modify the perceived architecture of the room. Ceilings made of light that can shift fluidly from an homage to the Sistine Chapel to a postmodern dance club. Fabric or upholstery that appears to change because it is being projected onto what is, in reality, a bland white sheet.

Many technology companies, including Microsoft, have put together concept videos showing how smart displays and advanced high-resolution, high-luminosity projection capabilities will enable entire rooms, and even entire buildings, to become completely configurable, tunable display surfaces.

...Consider the idea that buildings of the future will have the intelligence not only to sense their occupants but also to sense individual intent at the scale of thousands of simultaneous occupants, optimize decision-making to balance the priorities and importance of each of those occupants, and then visually communicate with each individual in a completely personal context.

AI is not just about robots and avatars; it's about structures as massive as the buildings we occupy coming to life. Future buildings, as well as bridges, roads, dams, pipes, and canals, will be as much about augmented reality and advanced automation as about steel, concrete, and stone. In the coming years, aesthetics in architecture might well come to mean intelligence and adaptability, not simply static beauty.

As the connectivity of our built environment increases, so, too, does our vulnerability. Earlier in our journey, we investigated network hacking and the different ways our computer systems are being infiltrated by forces both benign and nefarious. But our own bodies are just as vulnerable, if not more so. The human brain evolved with loopholes to aid in our survival, and these same loopholes are now being exploited on social media networks. Why are our minds so vulnerable to the influence of these outside forces, and what can we do to protect ourselves?

Before we go on to explore how AI is changing the landscape of social engineering and manipulation en masse, we should first take a moment to explore why our human minds are so vulnerable to these "mind hacks." Today our media landscape is awash in stories about "fake news" and possible "alternative facts."

Journalists and pundits have dubbed our current political landscape the "post-truth" era. Although satirists and sketch comedy shows have had a field day with figures like conspiracy theorist Alex Jones and updates from Breitbart News, the "fake news" strategy is incredibly effective because it goes to the heart of a shortcoming in our ability to reason: we have a security loophole wired into our psychological makeup, a vestige of our earlier days roaming in close-knit tribes and bands. The human brain goes into lockdown to preserve ideology in the face of an "onslaught of rationality." We would rather believe lies than let the truth dismantle our tribal loyalty. Political researchers Brendan Nyhan and Jason Reifler labeled this loophole "the backfire effect," and their work explores its many manifestations, from the antivaccination movement to failed attempts by the media to correct the Obama Muslim myth. Over and over, they found that when the media tries to correct "alternative facts," it alienates its audience. In a series of studies, they concluded that the effect is particularly pronounced with regard to religious and political counterarguments.

This means that leaders, political groups, and advertisers can "hack" into our ideologies, trigger psychological lockdown, and hold us emotionally captive. In his groundbreaking 2011 book Thinking, Fast and Slow, psychologist Daniel Kahneman showed us, from a different angle, additional ways our brains are vulnerable to hacking. He divided our thinking into System 1 thinking, which is automatic and involves little energy, and System 2 thinking, the conscious, deliberate, and labored thinking process. We off-load much of our day's duties to System 1 thinking, which makes it eminently "hackable." Fast thinking is template thinking, and when the template becomes biased (toward one candidate or another, for example), we automatically reinforce that bias each time we take in new information.

We can see our vulnerability in a phenomenon Kahneman identifies as the "anchoring effect." Anchoring might well be called the playbook of any skilled salesman. Take a typical flea market exchange as an example. We take a liking to an antique couch and ask the price. The trader tells us it costs $4,000. We make an instant assessment, deluding ourselves into thinking it is a rational decision, and immediately refuse. As we walk away from the stall, the trader tells us he can give us a special deal of $900. Suddenly the couch, while still expensive, is a deal too good to pass up. With another seemingly rational assessment, we decide we simply have to take advantage of the "one-time" offer. When will another opportunity like this come up again?

Obvious as it seems, the structural underpinning of this type of sales trick is the anchoring effect. We make estimates that appear to be reasonable and objective but are actually deeply biased by information we have just taken in. In a 1974 experiment, Kahneman and his collaborator Amos Tversky asked people to spin a "wheel of fortune" painted with numbers from 0 to 100. The subjects had no idea that the wheel was rigged to always land on 10 or 65. When the wheel stopped on one of those numbers, the experimenters asked participants whether they believed the number of African nations in the United Nations was higher or lower than the number on the wheel, and then asked them to estimate the actual number. It's important to note that this is a question, like the price of a charming antique sofa, that most people don't know how to answer. Because the participants had no idea (few people have this kind of number memorized and available automatically), those who saw a 10 on the wheel guessed that around twenty-five African nations were in the UN, while those who saw a 65 guessed around forty-five. From the participants' perspective, the wheel was entirely random, and they probably gave the number on it no real conscious thought. What they didn't realize, however, was that the number was "anchoring" them, giving them something concrete against which to make a guess.
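To make the pattern concrete, here is a toy model of anchoring as "insufficient adjustment": the guess starts at the anchor and only moves partway toward one's vague prior belief. The adjustment weight and the prior are made-up illustrative values, chosen so the outputs echo the roughly 25-versus-45 estimates reported in the experiment; this is a sketch of the idea, not the experiment itself.

```python
# Toy model of the anchoring effect: a guess begins at the anchor and is
# only partially adjusted toward a vague prior belief. All parameters are
# hypothetical, picked to illustrate the reported 10 -> ~25, 65 -> ~45 pattern.
def anchored_estimate(anchor, prior_guess=35, adjustment=0.6):
    """Return an anchor-biased estimate: move `adjustment` of the way
    from the anchor toward the (vague) prior, and stop there."""
    return anchor + adjustment * (prior_guess - anchor)

low = anchored_estimate(10)   # low anchor drags the guess down
high = anchored_estimate(65)  # high anchor pulls the guess up
print(low, high)  # 25.0 47.0
```

The point of the sketch is the inequality, not the exact numbers: whatever the prior, an incomplete adjustment leaves the final estimate tethered to the anchor.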

Unless you actually have a fact like the number of African countries in the UN at the ready, the anchors surrounding you at any given moment are inevitably influencing your choices. Psychologist Robert Epstein used research founded on this anchoring effect to test how a group of voters in the 2014 election in India might be influenced by online search results. Epstein showed that by placing positive or negative links higher in search results, he and his coauthor could influence how an undecided voter ultimately chose a candidate. Their experiment revealed that a biased search result could shift the votes of undecided voters by 12 percent or more.

But this type of invisible manipulation does not exist only in the political sphere: in something as simple as the latest evolution of the A/B test, we see what is now referred to as "clickbait." After more than a decade of Google's iterated advertising model, we have optimized the loopholes in the human brain to an extreme degree. Today, even on our more prestigious and high-end websites, advertising has evolved into fragments of outlier images because, as A/B testing has revealed, our visual cortex is trained to focus on outliers in the landscape. Long ago, this was a skill that allowed us to spot the movements of a tiger or a lion on the savanna. Today, when we pull up a website and see an image we cannot fully decode (a bizarre fruit, or a picture of only a section of a body part), we are really seeing a precisely calibrated mind hack. Our "fast thinking" system, automatic and habitual, will click on it before we are even aware of our actions.
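The machinery of the A/B test itself is mundane: show two variants, count impressions and clicks, keep the one with the higher click-through rate. A bare-bones sketch, with entirely hypothetical variant names and click counts, might look like this:

```python
# Minimal A/B test readout: compare click-through rates (CTR) of two ad
# images and keep the winner. The variant names and counts are invented
# purely for illustration.
def click_through_rate(clicks, impressions):
    """CTR = clicks / impressions (0.0 if there were no impressions)."""
    return clicks / impressions if impressions else 0.0

variants = {
    "ordinary_photo":  {"clicks": 40,  "impressions": 10_000},
    "bizarre_outlier": {"clicks": 180, "impressions": 10_000},
}

# Pick the variant with the highest CTR.
winner = max(variants, key=lambda v: click_through_rate(**variants[v]))
print(winner)  # bizarre_outlier
```

Run at scale, millions of times a day across millions of users, this trivial loop is what gradually selects for the outlier images the passage describes.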

AI-powered natural language generation (NLG) systems can take this clickbait model even further by automatically composing sentences that feature offers, solicitations, and other provocative content designed to trigger specific actions. Today, technologists are working with AI to dig deep into online behavior patterns ("reality mining") in an effort to predict what will happen based on what has come before. But in the future, AI will not just read reality; it will write reality.

As we enter an age of AI-enabled strategies, groups and organizations no longer need to limit themselves to predicting elections; they can turn an election. Only a few years ago, we watched in wonder as the Middle East was swept up in the Arab Spring. Tomorrow, AI will enable its users to cause an Arab Spring. And we aren't talking about a sentient artificial general intelligence here: all of this mind hacking is possible with today's technology, artificial narrow intelligence plus a human user supplying the intent.

...With the rise of rapid-fire changes caused by AI systems, we will experience countless examples of emergent behavior amplified across our interlinked social, financial, and ecological systems. We can see a microcosm of emergence in the financial markets, where algorithmic trades can cascade repeatedly until the market is inexorably drawn into a crash. This is the emergent complexity of any one algorithm in partnership with a complex system: the system is inextricably bound together, making it impossible to isolate the behavior of any one entity. Perhaps most disconcerting, human intelligence is not needed to trigger this sort of cascade in any of our networked social structures. It requires little labor, lots of data, some well-programmed algorithms, and tremendous processing power. Humans will be left with a black box of machine learning hacking our minds open. Today we have seen only the first slivers of this emerging threat.
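A cascade of this kind can be sketched in a few lines: each trading bot sells once its stop-loss threshold is breached, and each sale pushes the price down, which can breach the next bot's threshold. The thresholds, the price impact per sale, and the initial shock below are all invented for illustration; real market dynamics are vastly more complicated.

```python
# Minimal sketch of a sell-off cascade among threshold-triggered trading
# bots. Every number here is hypothetical and chosen for illustration.
def cascade(price, thresholds, impact=2.0):
    """Each bot sells once when price falls below its stop-loss threshold;
    each sale depresses the price by `impact`, possibly triggering more bots.
    Returns the final price and how many bots ended up selling."""
    triggered = set()
    changed = True
    while changed:
        changed = False
        for i, threshold in enumerate(thresholds):
            if i not in triggered and price < threshold:
                triggered.add(i)
                price -= impact  # the sale itself moves the market
                changed = True
    return price, len(triggered)

# A small initial dip just below 100 ripples through every bot in turn.
final_price, sellers = cascade(price=99.0, thresholds=[100, 98, 96, 94, 92])
print(final_price, sellers)  # 89.0 5
```

No single bot's rule is irrational in isolation; the crash emerges only from their interaction through the shared price, which is the point the paragraph makes about complex systems.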

How are we to resist mass manipulation on a scale previously unfathomable in our civil society? My response is that the infinite public space we have created, in the form of the Internet and its networked societies and systems, is a place no human police force could ever come close to monitoring. Will freezing further research on artificial intelligence stop the use of technology that exists today, or protect us from the technology to come? As we enter this new age, where there will be those who abuse AI to further questionable or nefarious agendas, there will also be those who use the same technology to protect society. Hiding behind bans may imperil us. We will soon find that it is only AI that can protect us from AI.

The Sentient Machine: The Coming Age of Artificial Intelligence by Amir Husain
My rating: 4 of 5 stars

The author is pro-AI and makes a convincing case that, at this point, there is no other way to be; we are already too far down the rabbit hole. He does, however, hint at downsides, which he uses to build his argument for continuing to develop even more independent AI. As he says in the book, "We will soon find that it is only AI that can protect us from AI."

