The Age of AI

Blinkist Free Daily
Henry Kissinger, Eric Schmidt & Daniel Huttenlocher

1/7

What’s in it for me? A whole new world.

The influence of artificial intelligence (AI) extends far beyond theoretical applications; it is now a tangible part of global civilization, evolving rapidly and touching everything from creative arts to advanced weapon systems. The pace of AI development and its integration into daily life raise critical questions about its governance and ethical use. How society and its leaders navigate AI’s evolution is a defining challenge of our time.

In this Blink, we’ll explore the origins of AI, assess its current capabilities, and ponder its future trajectory. More importantly, we’ll delve into the critical and complex questions that arise in an era increasingly shaped by AI technology.

2/7

From Turing to today

AI has progressed rapidly from theoretical concepts to advanced technologies, like machine learning, that continue shattering our expectations.

It was twentieth-century scientist and philosopher Alan Turing who proposed a pioneering evaluation to ascertain whether machines could exhibit human-level intelligence. His idea was to give machines creative challenges rather than mathematical tests. Turing described an experiment in which a human evaluator would hold written conversations with a computer and another human, without knowing which was which. The evaluator would then judge the machine’s ability to demonstrate human-like responses, reasoning, and emotional intelligence through its natural language capabilities.

This approach placed emphasis on testing a machine’s capacity for tasks like language fluidity, problem-solving, and adapting to new information. Mathematical prowess alone wasn’t enough. Turing’s criteria for intelligence – known as the Turing test – remains influential today, shaping key benchmarks of AI progress beyond speed and accuracy metrics.

Researchers found that rule-based code alone couldn’t replicate human cognition. So techniques using neural networks – that is, networks of interconnected nodes used for machine learning – were developed, enabling AI to learn from ambiguous, imperfect information just as we do. Feed an AI enough examples of molecular structures and antibiotic effectiveness, and it can predict potent new medicines unforeseen by science. Feed it classic novels and news articles, and it will produce original, often eerie, stories.
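The idea of learning from examples rather than explicit rules can be made concrete with a toy sketch. Here, a single artificial neuron – the simplest building block of a neural network – is trained by gradient descent to reproduce the logical OR function. The example and code are purely illustrative, not from the book:

```python
import math

# Training examples: inputs paired with the "right answer" (logical OR).
# The neuron is never told the rule -- it must infer it from the data.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # weights and bias, learned during training
rate = 0.5                 # learning rate: size of each corrective step

for _ in range(2000):      # repeated passes over the examples
    for (x1, x2), target in data:
        # Sigmoid activation squashes the weighted sum into (0, 1).
        out = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        err = out - target
        # Nudge each weight in the direction that reduces the error.
        w1 -= rate * err * x1
        w2 -= rate * err * x2
        b -= rate * err

# After training, the neuron reproduces OR for every input pair.
for (x1, x2), target in data:
    out = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
    print((x1, x2), "->", round(out))
```

Real systems chain millions of such units and tune billions of weights, but the principle is the same: adjust the weights until the outputs match the training examples.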

Rather than simply executing commands exactly as written, programs can now be built to learn from training data.

In the field of biology, the protein-folding prediction system AlphaFold demonstrates this well. By training on protein sequence and structure data, it achieved accuracy far beyond previous methods, allowing biologists to validate theories at unprecedented scale.

To do this, scientists utilized huge databases such as the Protein Data Bank, which contains over 170,000 experimentally determined 3D protein structures. They also used datasets like UniProt, which include millions of protein sequences without known 3D structures. With the combined information from these training sets, AlphaFold learned what properly folded proteins should look like and generalized to proteins whose structures hadn’t yet been solved. After extensive refinement, this training produced unprecedented accuracy in predicting protein folding, opening possibilities to better understand disease and drug interactions.

As many advantages as there are, AI has its limitations, and these are often revealed through bias and unpredictable mistakes.

Flaws in training data or human oversight can severely skew system outputs, like how Microsoft’s chatbot Tay began spewing racist, sexist language after public interaction. And lack of context when processing information causes AI bloopers no person would make.

Large language models such as ChatGPT work by training the AI system on massive text datasets, allowing it to learn nuanced patterns in how language and writing work. The systems predict the most likely next word in a sentence, building fluent, human-like text word by word. Crucially, large models are optimized to continue text sequences fluently, not for accuracy or truth. So without oversight, they can confidently generate believable yet completely invented or unethical text, as happened with Tay.
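To make the next-word idea concrete, here is a deliberately simplified sketch: a bigram model that "predicts" the next word purely from counts in a tiny invented corpus. Real large language models use deep neural networks trained on vast datasets, but the core task – continue the sequence plausibly – is the same, and the sketch shows why fluency carries no guarantee of truth:

```python
from collections import Counter, defaultdict

# A tiny invented corpus -- real models train on massive text datasets.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a "bigram" model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

# The model picks the statistically likeliest continuation,
# with no notion of whether the resulting sentence is true.
print(predict_next("the"))
```

Scaled up to billions of parameters and far richer context, this same optimize-for-a-likely-continuation objective yields fluent text – and, without oversight, fluent fabrications.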

It’s therefore crucial that we instill accountability so AI reliably aligns with ethics and social benefit. Countries are implementing professional certification, standards, and compliance monitoring, because the technology’s growth will not slow on its own. But if guided properly, exciting possibilities lie ahead: AI lawyers, assistants, even artists producing astonishing creative works that reflect the diverse beauty of human cultures.

3/7

Global networks

Powerful digital platforms are now intrinsic to modern life, relying extensively on AI to optimize experiences for billions of users worldwide. Services like social media, e-commerce and search engines use algorithms to customize the information and products we see.

This brings immense conveniences but also raises complex questions around accountability. When AI curates content, it can inadvertently promote social divisions or limit academic discourse essential for democracy.

Facebook demonstrates this tension. Its algorithms excel at showing users preferred content and friends to increase engagement on the platform. However, they also enabled sensational misinformation to spread rapidly during recent elections, curating a digital tunnel vision based on outrage and clickbait. Yet potential solutions also risk censoring voices. The platform has expanded community standards and takedown procedures to combat toxic posts and fake accounts. But mistakes still inevitably occur, perhaps blocking activists or journalists sharing controversial but vital perspectives.

These moderation mechanisms highlight AI’s deficiencies in analyzing nuanced contexts. Machines cannot judge every situation perfectly, especially content involving humor, political dissent and more. Does censoring problematic material limit healthy exposure to disagreeable ideas that builds societal resilience? What moral and intellectual principles do we want our society to live by? And, how can we ensure that AI adheres to these social expectations?

As we rely on increasingly sophisticated and therefore less transparent AI to moderate and shape content, the political and technological ramifications of these choices are still unclear. The pace of advancement is too rapid for many countries to decide if and how AI should be implemented within their culture. This means that AI is imbued with certain social assumptions which are determined by the culture that developed it, rather than the cultures using it.

And governments are not the only players. Facebook illustrates how large companies that use AI can shape the geopolitical landscape to maintain a competitive edge. When knowledge is power, we need to ask ourselves who controls the flow of knowledge.

Overall, technological advancements in AI have raised questions as old as human dissent itself. Difference of opinion, misinformation, and perceived truth have always competed in public spaces. But the global scale and sophistication of modern technology supercharge these questions. It’s important that we consider the far-reaching consequences of our decisions around AI if we are going to build a world that makes the most of this incredible technology.

4/7

Security and defense

We’re already witnessing how AI emulates and often surpasses humans in knowledge acquisition and application, but along with exciting promises for the future, darker forces also come into play.

Much like the development of nuclear weapons in the 1940s, AI promises to revolutionize defense technology. Its integration into areas like weapons targeting, drone swarm coordination, and network defense introduces new dangers without historical precedent. Nuclear weapons deter attack through a clear “red line” of no return; AI, by contrast, blurs the boundaries of human control. Moreover, the expansion of attack capabilities severely complicates security strategy.

Let’s look at an example. Autonomous cyber weapons enabled by AI might independently search networks for vulnerabilities after being unleashed. They could potentially adapt their coding to spread further than intended, meaning traditional deterrence models based on perceived escalation risks grow unreliable. If adversarial malware utilizes AI to disguise attacks as system glitches, noticing in time for an appropriate response becomes difficult as well.

Cold War nuclear doctrine relied on clear threat signaling to prevent crises. But ambiguity surrounds these new defense dynamics, and that brings instability to international peace conversations.

Additionally, AI integrated into the strategic side of defense may advise commanders in unintentionally biased ways during complex operations. Current recognition software can struggle to differentiate basic objects when lighting conditions or angles change slightly. Do we trust AI to decide on targets based on facial recognition or behavioral patterns? Who, if anyone, is responsible for the decisions that an AI makes in a combat situation?

Organizations like the Armed Forces Radiobiology Research Institute exist to advise the US military on incidents related to radiation, but no equivalent exists to advise it on AI. This means that the largest global military powers are vulnerable, both to misusing AI in the field and to having AI used against them in unpredictable ways.

Defense planning must therefore balance emerging risks with the realities of current security technology. We must look to the future while also understanding how AI really works in the present. And as this technology emerges, we have the chance to develop ethical norms and cooperation channels internationally, guiding responsible growth.

The bullet doesn’t have to precede diplomacy, as it has done before. Through academic collaboration and political expedience, it’s possible for AI to progress safely without first becoming a threat. But if we don’t have the foresight to act now, we might see AI unsettling the global order in unprecedented ways.

5/7

AI and human identity

As AI demonstrates capacities once considered distinctly human, its advancement raises philosophical questions about technology’s relationship with human identity. Systems now create art, music, literature, and more with increasing sophistication, enabling new creative possibilities. Yet as machines encroach on subjective realms that have long provided us with meaning, risks emerge regarding how we might rely too heavily on synthetic fulfillment.

Children utilizing AI companions may become so comfortable with personalized advice that human variation seems dissatisfying by comparison. And if technology filters information aligning with internal biases, understanding alternative perspectives could diminish too. Innovation could limit a healthy exposure to disagreeable stimuli that builds resilience.

As emerging AI creations also replace roles occupied by people, many of us may struggle with displacement as well. We’re already seeing major disruptions to almost every industry, from creative jobs to technological ones. When is it important for there to be a human behind the paintbrush? What does it mean when a person learns about the human experience from an artwork or a song without a human creator?

Consider emotion-detection algorithms now analyzing facial expressions, vocal tones, and syntax during conversations to infer inner states. Some applications aim to prevent harm by identifying those hiding distress. The Mapo Bridge in Seoul, South Korea, incorporated an AI suicide-prevention system after the landmark became a high-risk location. Using CCTV footage and deep-learning video analytics, the system identifies people at risk and alerts emergency services to intervene.

But algorithmically judging people’s subjective states risks being seen as dehumanizing, which can turn people away from the idea. We need to balance the effectiveness of such systems against their human cost, so that technology augments rather than isolates. Whilst the suicide rate at Mapo Bridge is reported to have decreased by over 75 percent since the implementation of the AI system, this doesn’t address the underlying social climate that led to such a high suicide rate in the first place.

Like other seismic shifts, if guided conscientiously, AI will reshape conceptions of purpose and actualize tremendous potential. If ethical priorities direct AI’s advancement, improved access and discovery may uplift more often than not. If sufficient boundaries are set, this will protect domains where humans create, explore, and connect without technological dependence. With enough careful foresight, AI can enlighten the journey of human life rather than diminish it. The risks are substantial, but so too are the possibilities.

6/7

AI and the future

When the printing press was invented, information became accessible, for the first time, to ordinary people beyond the handwritten manuscripts of the church. Later, the creation of the internet allowed us to find information globally, giving us real-time access to news and scientific findings on the other side of the globe.

Like the printing press and internet, AI promises another immense transformation in how we access and understand information. Its emerging capacities for human-like reasoning and creation reveal this potential daily, from medical insights to artistic marvels. Yet with any transition, change can be disruptive too. Already we’re seeing people resent the technology, or use it irresponsibly. Progress demands proactive, ethical guidance to make sure AI helps us more than it harms us.

Societies that widely adopt AI face choices determining how and when control is granted across contexts. Some fields like environmental monitoring could defer entirely to algorithms tracking ecosystem dynamics undetectable by limited human senses.

Other areas like law enforcement will require human oversight to ensure that constitutional rights remain protected by technologies prone to bias when analyzing identities. In 2020, the Detroit police began to use facial recognition systems to identify criminal suspects. However, multiple racial bias audits found that the software had an error rate as high as 96 percent for identifying African American and Asian faces accurately.

The error came from the AI’s training data, which consisted primarily of white faces. Responsible development of systems like this requires deliberate questions about how AI should be used, what biases are built into it, and who is responsible when it fails the community. Following public criticism and lawsuits citing the system’s infringement of civil rights, Detroit police dropped the technology.

But it’s hard to imagine modern life without digital assistants and AI guides. Our continual reliance shapes thought itself, as outside mechanisms take on basic cognitive functions. Enlightenment thinkers considered the individual first among political units, defining the purpose of reason as liberating human potential. When AI filters our access to information based on what it pre-determines to be our preference, it constrains our freedom and limits its usefulness in a democratic society.

This isn’t a condemnation of AI though, as innovation leaves no aspect of civilization untouched. By asking these ethical and moral questions early enough, we can incorporate AI into our society whilst avoiding these pitfalls. Those cultures balancing technology and wisdom will navigate the tumultuous periods ahead with their values intact. And if progress is grounded in ethical priorities, artificial intelligence may yet bolster our shared humanity. The only way we can shape the future is by engaging in the present.

7/7

Final summary

Artificial intelligence has rapidly progressed from narrowly focused systems to flexible technologies like machine learning that continue to exceed our expectations across every domain. However, significant risks emerge alongside limitless possibilities if the development of AI proceeds without ethical guidelines. As AI demonstrates capacities once considered distinctly human, philosophical questions arise around technology’s relationship with human identity and society.

If we balance progress with wisdom, set boundaries, and prioritize uplift over disruption, AI will hopefully enlighten life’s journey rather than diminish it. Guiding humanity through this transition conscientiously will ensure that we make the most of this incredible technology.
