The 21st century will bring changes and challenges unlike any humans have encountered before. Globalism and technological innovations are changing the structures of societies worldwide—and the changes are happening quickly. This book highlights the biggest challenges in the modern world, and it offers advice on making sense of and navigating such transitional times. If people don’t become better informed about the present and participate in shaping the future, the world could have a class of obsolete workers whose jobs have been automated, people could lose their ability to make their own decisions, and nuclear weapons could annihilate much of the world.
Technological innovations are changing the structures of society—from politics to the labor market. If humans are to address these challenges, they’ll need to create new tools and approaches that fit this new context.
For centuries, people have developed political models that fit the political, economic, and social context of the time, and these models provided a story to make sense of the world and an ideal future to work toward. In recent decades, the dominant political story has been liberalism, which promoted individual freedoms—through free trade, low taxes, free elections, peaceful international relations, rights for minority groups, and pro-immigration policies. However, massive technological innovations are automating jobs, widening inequality, and altering human behavior in ways that make the liberal story irrelevant—and people must either adapt an old political model to modern times or create a new one.
Parallel revolutions in infotech and biotech are transforming societies in three major ways: by automating jobs, by giving algorithms the power to override human decision-making, and by widening the gap between rich and poor.
In the 21st century, increasingly sophisticated technology could automate so many jobs that unemployment skyrockets among low-skilled workers. Neurological discoveries and technological innovations will enable machines to do jobs better than people can, because machines are immune to human error and biases.
Large-scale automation will likely cause a net loss in employment, creating a “useless class” of unskilled workers. Some workers will be able to get training in a new set of skills, but technology will continue to change so rapidly that those new skills could also become obsolete a decade later. This could eventually create a post-work society, in which workers face a fight against irrelevance and governments must determine alternative ways to support people.
In addition to threatening jobs, technology threatens human liberty, as algorithms learn so much about people that they gain an immense power to influence and manipulate. Liberalism maintains that everyone has free will to choose how to vote, how to act, and what to buy—but algorithms can make better choices than you do. For instance, Netflix’s algorithm might suggest a movie that fits your tastes better than one you would have picked.
Each decision that algorithms make for you has two effects: you come to trust the algorithm more because its choices suit you so well, and you gradually lose the practice, and eventually the ability, to make those choices yourself.
People’s reliance on algorithms can easily snowball to big life decisions, such as where to go to college, which career to pursue, and who to marry.
As technology threatens to create a useless class of unskilled workers and algorithms have the potential to overpower free will, inequality could grow exponentially: On one end of the spectrum will be the useless class, and on the other end will be the wealthy CEOs of tech companies. Making matters worse, biotech innovations could enable wealthy elites to become biologically superior by improving their physical and cognitive abilities and extending their lives. If wealthy elites gain biological advantages over the poor—and the poor are pushed out of opportunities to work and gain wealth—it could create a vicious cycle that continually widens the gap between haves and have-nots. Taken to the extreme, bioengineering could eventually turn the rich into a separate species with no need for the underclass of commoners.
Now that we’ve laid out the challenges, let’s explore potential methods that societies can use to address them.
How will humans tackle the massive challenges they face in the 21st century? One option is to band together and tackle them as communities. Facebook CEO Mark Zuckerberg wants to facilitate this by using AI to suggest groups that might be meaningful to individual Facebook users. The goal is to use the social media platform and the algorithmic tools to rebuild communities online in order to improve connections among people throughout the world. However, the project will only work if these online communities also exist offline, because creating a true connection with someone requires you to interact with her as a whole person, which generally calls for face-to-face interaction. In order to achieve this, Facebook may have to adopt strategies that actually encourage users to spend less time online and more time in the real world.
While you can belong to various communities—such as your family, your religion, and your nation—all humans are part of a global civilization, and cultural identities are merely branches of that civilization. In recent generations, as globalization has connected the world economically, socially, and technologically, all of humankind has merged into one global civilization. Although there are differences among groups within the global civilization—such as religious beliefs and national identities—people largely agree on the basic and practical matters, from political values such as human rights, political representation, and international law to everyday practices such as money, markets, and medicine.
Despite the existence of a global civilization, in recent years, feelings of disconnection from global economic forces and fears that globalization would disintegrate national systems of education and healthcare have revived a sense of nationalism. However, nationalism can't offer solutions to the three major challenges that people will face in the 21st century, all of which exist on a global scale and require an international response: nuclear war, climate change, and technological disruption.
If political models, governments, and scientists have failed to provide answers for how to navigate the immense challenges of the 21st century, could religion hold the answers? In order to explore this, we'll look at several areas where religion falls short.
Humans now find themselves in a global civilization, facing global problems, while also being divided by nationalism and religion. Amid this division, tensions have grown among people of different nationalities, and they come to a head in the issue of immigration. Immigration rests on an implicit deal between migrants and host countries—but immigration opponents say that immigrants aren't holding up their end of the deal, while immigration advocates say that host countries are falling short.
There are three terms of this deal: the host country admits the immigrants; in exchange, the immigrants adopt at least the core norms and values of the host country; and, if the immigrants assimilate sufficiently, they eventually become full and equal members of the host country.
Immigration is difficult to resolve because it is nuanced—both sides have legitimate arguments, but the friction lies in deciding where to draw the line. Difficult as it may be, each nation’s ability to reach an agreement on immigration will be a major indicator of its potential to come together with the rest of the global civilization to address the looming challenges of the 21st century.
Even with the right tools, people need to have the right mindset and a clear view of the world in order to overcome modern challenges.
In recent decades, fear of terrorism has gripped the world, ignited wars, and shaped politics—and that's by design. Terrorism is a strategy by which those with little power and few resources can inflict major harm: rather than relying on physical damage, which they can cause only on a small scale, terrorists aim to incite fear and chaos. Terrorists provoke their enemy into overreacting, and that overreaction causes the destruction that the terrorists don't have the strength to create. For example, the 9/11 terrorist attack caused mass fear and confusion, which prompted the U.S. government to respond with a show of power by declaring a War on Terror. That war ultimately destabilized the Middle East and created space for the terrorists to seize more power. In order to fight terrorism, governments must remember that terrorists have little power, and they must resist the urge to make a public show of their response.
Furthermore, military warfare is becoming an outdated means of gaining prosperity and geopolitical status. Whereas the most valuable economic assets used to be physical—such as land, gold, and goods—modern wealth is information and technology, which are impossible to capture through war. Today, most successful countries have improved their geopolitical status by improving their economies rather than their militaries. Additionally, with nuclear weapons and cyberwarfare, the potential for serious damage or total annihilation is higher than ever before.
Just as people inflate the perceived threats of terrorism and war, many people overestimate the importance of their own culture and its impact on the world. Children are raised with a misunderstanding of their culture’s importance, as school history lessons emphasize certain events, downplay others, and frame history based on how it affected their ancestors. This self-important view shows a lack of humility and a disregard for history, and it makes people more inclined to act in their own interest than in the interest of the global community.
People often think that their community alone possesses virtues like truth and morality. Religions decree that God dictates laws—such as what to wear, who to love, and what not to eat. While these divine laws have helped to maintain social order in many eras and cultures, they have also been the source of violence and discrimination. In reality, religious laws are unnecessary to keep order because morality is baked into human DNA.
In contrast to religion, secularism achieves social order by adhering to a code of ethics built on values such as truth, compassion, equality, freedom, courage, and responsibility.
In order to address the challenges of the 21st century, you need to be able to make sense of the world. This is increasingly difficult, as technology and globalization make the world more complex—but the threats of technology, nuclear weapons, and climate change make it more important than ever before to understand the world and help shape its future.
In order to find truth, you must recognize what you know—and what you don't know. Today, individuals don't need as broad a base of knowledge because they have access to a global network of collective knowledge and others' expertise. However, that access to knowledge has led to two dangerous phenomena: people mistake the knowledge in other people's heads for their own understanding, so they believe they know far more than they actually do; and groupthink can entrench entire communities in mistaken beliefs.
People’s difficulty in understanding how the world works also jeopardizes justice, which requires an understanding of cause and effect. For example, although you may think you’re innocently shopping for clothes, others may blame you for perpetuating child labor in sweatshops halfway across the world. While it’s unrealistic for individuals to try to close all their knowledge gaps, the best they can do is to acknowledge their ignorance and act with humility.
In a complex world where individuals struggle to understand the way things work, it's no surprise that lies have become pervasive. In fact, institutions have long used fictional stories in order to get strangers to cooperate for common causes. For example, religions have united believers around shared myths, and nations have rallied citizens around stories of common heritage and destiny.
People are often willing to believe something enough to act on it, even though, at their core, they know the story is fiction. However, believing lies can cause harm, so everyone has a responsibility to question and investigate the information they consume, and to keep an eye out for biases they unknowingly hold.
Once you’ve identified the challenges ahead, considered ways to address them, and found a way to make sense of the changing world, you must find your role in it. First, we’ll discuss the practical side of finding your role in society, then we’ll explore how to find deeper meaning in life.
As people prepare for the future, they must face the reality that the modern education system is not fit to prepare children for the 21st century. There are several reasons for this, including that schools focus on feeding students information at a time when information is more abundant than ever, and that no one knows which specific skills the job market will demand decades from now.
As people prepare for a new reality and new challenges in the 21st century, they'll inevitably ponder, "What is the meaning of life?" People have been asking this question throughout history, and they generally want the answer to fit into a story, because humans use stories to make sense of the world. Two common meaning-of-life stories are the religious story, in which a divine plan gives your life purpose, and the cosmic story, in which your life draws meaning from being part of something larger, such as the mighty and beautiful universe.
However, these stories don’t give meaning to your life—instead, you assign meaning to your life and experiences. Religion is only sacred because humans believe it to be. The universe is only mighty and beautiful because humans attach their feelings to it. You don’t need a story to prove that your life is meaningful—it’s meaningful because you give it meaning. At a time when global political, economic, and social systems are changing and the liberal story is becoming irrelevant, each person must reflect on how to make sense of the world.
In order to understand life, you must understand your own mind, because your mind determines how you experience, interpret, and react to the world around you. There are many ways to get in tune with your mind, including art, therapy, physical activity, and meditation, which takes your attention away from the noise and distractions of the external world and focuses it on the reality of your breath and bodily sensations.
When most people begin meditating, they struggle to concentrate for more than a few seconds at a time. When your mind inevitably wanders during your meditation, you learn how little control you actually have over your thoughts—and that realization is the first step in gaining that control. If you don’t begin to learn about your own mind, then algorithms will soon know your thoughts, fears, and desires better than you do.
Despite the huge challenges the world faces in the 21st century, humans have many powerful tools in their collective arsenal. These tools give humankind the power to make things much worse or much better—it all depends upon how we educate ourselves about the issues we face, and how well we can address them as a global civilization.
The 21st century will bring changes and challenges unlike any humans have encountered before. Globalism and technological innovations are changing the structures of societies worldwide—and the changes are happening quickly. 21 Lessons for the 21st Century highlights the biggest challenges in the modern world, and it offers advice on making sense of and navigating such transitional times. In five parts, the book lays out the biggest challenges facing humanity, explores how communities and nations can respond to them, examines the fears and divisions that stand in the way, explains how to find truth in an increasingly complex world, and offers guidance on finding your role and a sense of meaning in life.
This book is intended to inform people who feel too busy and overwhelmed with daily life to reflect on the state of the world and speculate on its future. If people don’t become better informed about the present and participate in shaping the future, society could have a class of obsolete workers whose jobs have been automated, people could lose the ability to make their own decisions, and nuclear weapons could annihilate much of the world.
In our globally connected world, one person’s small actions can impact an entire community on the other side of the world—for example, choosing to buy a particular shirt could be supporting child labor in Uzbekistan. This book will discuss individual and societal behaviors that shape cultures and impact the fate of the planet. Each of the 21 chapters covers a lesson about the challenges humans face and how to address them. There are no easy answers—rather, this book is meant to spur readers to keep exploring and to participate in shaping the future.
(Shortform note: Yuval Noah Harari is an Israeli historian, philosopher, and lecturer at the Department of History, the Hebrew University of Jerusalem. Harari's previous books include Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow.
Whereas these books focus on the past and the future, 21 Lessons discusses the present moment and the remaining decades of this century.)
Lesson: Liberalism—which values personal freedoms, free trade, and free elections—is the dominant political model today, but technological changes are transforming the political, social, and economic structures on which liberalism stands.
Political models have always served the purpose of providing a story to make sense of the world and an ideal future to work toward. That story depends on the political, economic, and social context at the time. As a result, there have been several times throughout human history when the political model of the day became irrelevant and needed to be replaced.
Feudalism and monarchism were the reigning political models until the Industrial Revolution created such upheaval in economics and politics that those models no longer fit. In response, in the early 20th century, elites around the world developed three new political stories, each of which offered a different way of making sense of the world and shaping the future: fascism, communism, and liberalism.
World War II struck down the fascist story. By the end of the 1980s, the communist story had also unraveled. Through the 1990s and early 2000s, liberalism was the primary story that people used to understand the world around them. Liberalism promoted individual freedoms as the keys to resolving all major societal issues, including poverty, oppression, and violence—through free trade, low taxes, free elections, peaceful international relations, rights for minority groups, and pro-immigration policies. In the U.S., although Democrats are associated with liberalism and Republicans with conservatism, politicians on both sides of the aisle have held liberal views, merely in different forms.
However, after the 2008 global financial crisis, people became disillusioned with the liberal view of the world. Liberal commitments to the free global flow of products, people, and ideas fell out of favor, and nations began pushing back against immigration and trade agreements (we'll explore this more as we discuss the rise of nationalism in Chapter 7). With liberalism's decline, people around the world find themselves with no political story to interpret current events and plan for future challenges, such as climate change and technological advancements. Technological disruptions and a changing economic system have also played parts in making liberalism obsolete.
Liberalism was developed to fit the social, political, and economic context of the Industrial Era, but the massive technological innovations that have arisen since the 1990s have made the liberal story irrelevant. For example, the Industrial Revolution created an economy that was dependent upon a mass of unskilled workers. By contrast, now, it’s likely that artificial intelligence (AI) will eventually eliminate many jobs, and cryptocurrencies are dramatically changing the financial system.
Specifically, parallel revolutions in infotech and biotech are transforming societies and economies as well as individuals’ minds and bodies. First, in contrast to the relatively straightforward innovations of the Industrial Era, technological advancements are becoming too complex for most people to understand. As a result, people are uninformed about how machines are changing the labor market and algorithms are influencing the way they think, shop, and vote. Many people feel disconnected and left behind by new technologies, and they worry about becoming irrelevant in a changing labor market.
However, it’s critical to the future of the globe that people understand and participate in the changes happening around them. Even politicians are struggling to grasp the effects of these tech revolutions, despite the fact that they pass laws governing their effects, such as how to manage the collection and use of Big Data.
Second, the biotech and infotech revolutions are allowing humans to alter their own bodies, which could have unknown consequences. Before these revolutions began, humans developed technology to alter the external world—for example, by building a dam to change a river’s course. Because people did not consider the long-term effects, the world is now facing climate change as a result of humans’ massive changes to the planet. Now, technology is being developed to alter humans’ internal worlds, such as slowing or stopping the aging process through bioengineering. If people again fail to consider the long-term consequences, these internal changes could have major effects on individuals’ psychological well-being and the way society functions collectively.
Most of the people who are disillusioned with liberalism today don't disagree with all liberal ideals—rather, they want to do away with the global aspects of liberalism, such as free trade, international cooperation, and open immigration, and keep only the national ones, such as free elections, civil rights, and free markets at home.
However, the national and global values of liberalism are interconnected. For example, you can’t promote consumer rights to the free market without also supporting international trade. Increasing support for national over global policies led to the election of Donald Trump and the vote for Brexit in 2016. Both political movements tapped into a desire to return to an imagined past golden era (as evidenced by the “Make America Great Again” slogan)—but this approach is unlikely to prepare the world for the completely new territory ahead.
At present, liberalism is better than the alternative political stories that currently exist (such as authoritarianism), but it still doesn’t offer answers for our most pressing modern challenges: climate change and technological disruptions. When people reach the other side of this phase of disillusionment with liberalism, they must either adapt an old political model to modern times or create a new one in order to have a roadmap for navigating present and future challenges.
Lesson: Technological innovation is enabling AI to perform an increasing number of jobs, which will cause massive unemployment.
One of the biggest challenges of the 21st century will be a fundamental change in the labor market caused by technological disruption. In other words, increasingly sophisticated technology could automate so many jobs that unemployment skyrockets.
Throughout history, each new machine and labor-saving technology created at least as many jobs as it eliminated—for example, a piece of equipment that replaced a human laborer also required someone to operate the equipment and another person to maintain it. Past innovations replaced human workers' physical capabilities, but not their cognitive abilities. No matter how quickly a machine could sew a shirt compared to a seamstress, the machine couldn't take customers' measurements.
However, the dual rise of infotech and biotech is creating technologies that could truly replace the need for human workers. New discoveries in neuroscience have revealed that human skills such as analyzing, decision-making, communicating, and interpreting other people's emotions are the results of specific brain algorithms—not the elusive forces of free will. For example, when a lawyer enters negotiations, what feels like intuition is actually her brain running algorithms that pick up on the other party's biochemical patterns, including their gestures, tones of voice, and facial expressions.
Now that scientists understand how the human brain uses these algorithms, technologists can replicate those processes with AI. As a result, not only can machines do a human's job, but they can do it better than humans, because they're immune to human error and biases. Additionally, machines can be connected on a network and updated all at once. Imagine the implications in the medical field: When new research comes out, it's nearly impossible to alert every doctor in the world—but a global network of AI doctors could receive that update in an instant. Similarly, a network of self-driving taxis could all be updated the moment traffic laws change.
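To make the networked-update advantage concrete, here is a minimal Python sketch. It is our illustration, not anything from the book: the names SharedKnowledgeBase and AIDoctor are invented. Because every agent reads from one shared knowledge store instead of keeping a private copy, a single publish call effectively updates the entire fleet at once.

```python
# A minimal sketch of the shared-update idea. SharedKnowledgeBase and
# AIDoctor are invented names; nothing here comes from a real medical system.

class SharedKnowledgeBase:
    """One knowledge store that every connected agent reads from."""
    def __init__(self):
        self.guidelines = {}

    def publish(self, topic: str, guideline: str) -> None:
        # A single publish updates every agent at once, because the
        # agents share this store rather than holding private copies.
        self.guidelines[topic] = guideline

class AIDoctor:
    def __init__(self, name: str, knowledge: SharedKnowledgeBase):
        self.name = name
        self.knowledge = knowledge  # shared reference, not a copy

    def recommend(self, topic: str) -> str:
        guidance = self.knowledge.guidelines.get(topic, "no guidance yet")
        return f"{self.name}: {guidance}"

kb = SharedKnowledgeBase()
doctors = [AIDoctor(f"clinic-{i}", kb) for i in range(3)]

kb.publish("hypertension", "start treatment at 130/80")
print([d.recommend("hypertension") for d in doctors])
# All three agents answer with the new guideline the instant it's published.
```

Contrast this with human doctors, each of whom carries a private copy of medical knowledge that must be updated one person at a time.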
The indirect effects of automating high-skilled jobs would be substantial and far-reaching. For example, if computers can do the job of doctors, the massive savings in payroll could make healthcare more affordable for everyone. Furthermore, patients in rural Uganda could receive the same quality of care as patients on the Upper East Side of Manhattan, because their robot doctors would have access to the same information and resources.
However, even with the improved capabilities of AI, not all professions lend themselves to automation. For example, AI doctors are more likely to emerge than AI nurses, because doctors' job duties involve collecting data on patients' symptoms and analyzing the information in order to give diagnoses—and data collection and analysis are two of computers' strongest abilities. Nurses, on the other hand, need a broader range of physical and emotional skills to work with patients.
While it’s tempting to assume that creative professionals like artists and musicians would be similar to nurses in being immune to automation, even the arts aren’t safe from AI. Just as decision-making and interpersonal interactions trace back to neurological processes, so do emotions, which are at the core of creating and consuming art. When you listen to music, your emotional response is reflected in physical reactions, such as a change in heart rate and hormone levels. Machines measuring your biometric data could learn your preferences and create art designed to elicit a specific emotional response.
For example, if you go through a breakup, an AI sound system that has learned which genres, bands, and songs you like could detect your emotions and play the perfect sad and angry songs to fit your mood. If you prefer that your sound system play music that lifts your mood instead, you could make one of two adjustments: tell the system the emotional outcome you want, or let it learn from your biometric reactions which songs actually improve your state.
Additionally, your AI sound system could compose new songs tailored to your individual tastes, or craft global hits by learning the biochemical patterns that millions of listeners share.
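Here is a hypothetical Python sketch of the mood-matching idea; the detect_mood heuristic, its thresholds, and the playlists are all invented for illustration. The policy parameter captures the choice described above, between mirroring your mood and deliberately lifting it.

```python
# A hypothetical sketch of a biometric music picker. The detect_mood
# heuristic, thresholds, and playlists are all invented for illustration.

def detect_mood(heart_rate: int, cortisol: float) -> str:
    """Toy classifier: elevated pulse plus stress hormone -> 'upset'."""
    return "upset" if heart_rate > 90 and cortisol > 15.0 else "calm"

# Song choices indexed by (detected mood, listening policy).
PLAYLISTS = {
    ("upset", "match"): ["sad ballad", "angry anthem"],
    ("upset", "lift"): ["upbeat pop", "feel-good classic"],
    ("calm", "match"): ["mellow jazz"],
    ("calm", "lift"): ["energetic dance track"],
}

def choose_songs(heart_rate: int, cortisol: float, policy: str = "match") -> list:
    mood = detect_mood(heart_rate, cortisol)
    return PLAYLISTS[(mood, policy)]

# After a breakup, the default policy mirrors your mood...
print(choose_songs(heart_rate=105, cortisol=22.0))
# ...but you can tell the system your goal instead of your state.
print(choose_songs(heart_rate=105, cortisol=22.0, policy="lift"))
```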
Since the arts are generally revered as uniquely human endeavors, the potential for AI to automate creativity will force humans to confront the question: Is art about playing on human emotions—which reflect biochemical activity—or is it something deeper?
Although AI has the potential to eliminate many jobs, it can also create or shift work in three ways: creating new jobs to develop, maintain, and oversee the technology; pairing humans and computers into teams that outperform either alone; and shifting human labor toward services that require uniquely human skills.
Despite these possibilities, it's likely that there will still be a net loss in employment, creating a "useless class" of unskilled workers. Some workers will be able to get training in a new set of skills, but technology will continue to change so rapidly that those new skills could also become obsolete a decade later. Even in fields that introduce human-computer teams, the computers may eventually perform so well that they no longer need their human partners. This volatility in the job market presents its own challenges, such as whether workers will have the financial means and the emotional stamina to retrain and reinvent themselves repeatedly throughout their careers.
Faced with the possibility of looming unemployment, there are a few possible routes that governments can take: they can try to prevent jobs from being lost by slowing the pace of automation, help workers transition into new jobs through retraining, or accept that a post-work economy is coming and find new ways to support people.
If the world faces the possibility of a post-work economy, governments need to determine ways to support people. One possibility is a universal basic income (UBI), which taxes corporations and billionaires and distributes money to the rest of the population. However, this strategy has two major complications: defining "universal" (would taxes on tech giants in one country fund stipends for displaced workers in another?) and defining "basic" (which needs should the income cover?).
Another possibility is to broaden the definition of work. Parents who take care of their children, people who take care of elderly relatives, and community members who organize their neighbors are all contributing to society but don’t get paid for these services. If the government recognized these tasks as jobs and paid the people who performed them, it would reduce unemployment—but the government would need to levy taxes in order to fund those wages. In practice, this ends up being similar to UBI.
A third possibility is to provide free basic services instead of basic income. This would require the government to cover the cost of education, transportation, healthcare, and other services, which is essentially the communist ideal. However, this strategy would still require governments to determine what "basic" encompasses.
Twentieth-century workers were vital to the economy but lacked political power, so they fought against exploitation. In this technological revolution, workers may have political power but they are not necessary to the labor market, so they face a fight against irrelevance. There are so many variables—from how technology will develop to how the government and society will adopt new innovations—that there’s no way to know how the labor market will change or how long it will take. However, the stakes are high enough that people can’t afford to bury their heads in the sand about the possibility.
Could widespread automation threaten your job?
In the time you’ve been working in your field, has any part of the industry been automated? If so, describe what it was.
Did the automation cause some people to lose their jobs? Did it create new jobs?
What aspects of your job have computers taken over or made easier?
What future aspects of your job could computers take over?
If computers take over more of your daily tasks, what useful work would that free you to do more of?
Lesson: As algorithms provide increasingly accurate suggestions, their convenience is almost irresistible—but relying on algorithms to make your decisions causes you to lose the freedom and ability to make your own choices.
In addition to threatening jobs, technology threatens human liberty, as algorithms learn so much about people that they gain an immense power to influence and manipulate. This is another way that technology is undermining liberalism, which is all about freedom and personal liberties—to vote, to buy goods in a free market, and to pursue individual dreams and goals with the protection of human rights.
Liberalism maintains that everyone has free will, regardless of education and social status. In practice, people's free-will choices reflect their feelings more often than their knowledge. For example, between two presidential candidates, voters are more likely to choose the one who gives them a good feeling, even if the other candidate has a more thorough policy plan. Similarly, elected officials often make decisions based on gut feelings and intuition, even when those go against advisors' recommendations. From the way voters vote to the way leaders lead, democracy hinges on emotion-driven free will—but technological advancements could make it possible to hack people's emotions, leading to disastrous results.
Before the advent of liberalism, societies were guided by mystical, divine messages from the gods. In the last few centuries, the authority shifted from gods to free will. Although free will feels free, it’s actually a biochemical response honed by evolution and designed to help you survive and thrive. For example, when you see a snake, your reaction to run away is merely an evolutionary response to keep you safe. Similarly, when you feel bad after having an argument with a friend, your desire to make amends is not purely emotional, but rather a function of your biological wiring to cooperate within a community.
This biochemical process meant to promote your safety and well-being—which we call free will—has historically been a perfectly valid method of making decisions and running democracies. However, science has developed technology that can not only replicate that process but also perform it better than you can. As people shift authority from free will to computer algorithms, liberalism becomes increasingly obsolete.
People have already delegated some tasks to algorithms: You let Netflix suggest your next movie, and Google Maps tells you when and where to turn. Each decision that algorithms make for you has two effects: you trust the algorithm more because its choices suit you so well, and you gradually lose the practice, and eventually the ability, to make those choices yourself.
The algorithms won't be perfect, and they won't make the best decision every time—but they don't have to. As long as algorithms make better choices on average than humans do, they'll still be considered a better alternative. Additionally, if you wear biometric sensors on or inside your body, those sensors can monitor your heart rate, blood pressure, and other indicators of your preferences, opinions, and emotions. Using this data, the computer can make even more well-informed decisions for you.
The reliance on algorithms can easily snowball to more and bigger decisions, such as where to go to college, which career to pursue, and who to marry. An algorithm that uses your biometric data can learn what makes you laugh, what makes you cringe, and what makes you cry. This algorithm could use that data to find a compatible partner for you to marry, and it would probably make a better choice than you would with your free will, since your decision might be influenced by a past breakup or be otherwise biased in some way.
If computers make all of your big decisions, your life would probably be much smoother without the stress of decision-making or the consequences of poor choices. But what would that life be like? So much of the drama and action in day-to-day life revolves around decision-making—from deciding whether to take on a project at work to figuring out where to relocate your family. The value humans place on decision-making is reflected in various institutions: democratic elections assume that the voter knows best, free markets assume that the customer is always right, and liberal education teaches students to think for themselves.
When humans rely on algorithms to make every choice for them—essentially molding the path of their lives—what will humans’ role be, besides providing biometric data to be used in the decision-making process and then carrying out the verdict?
Some of the most difficult and nuanced decisions people have to make are about ethical dilemmas. If they’re programmed to do so, algorithms could even handle ethical decisions—but the capability would come with pros and cons.
On the positive side, algorithms would make the ethical choice every time. The computer wouldn’t be swayed by selfish motives, emotions, or subconscious biases, as humans are. Regardless of how resolute a person may be about ethics, in a stressful or chaotic situation, emotion and primitive instincts kick in and can override philosophical ethics. Additionally, a hiring manager can insist that racial and gender discrimination are wrong—but her subconscious biases may still prevent her from hiring a black female job applicant.
On the negative side, delegating decisions to machines that follow absolute ethics raises the question: Who decides which philosophy is programmed into the software? Imagine a self-driving car cruising down a road when children run into the street in front of it. In a split second, the car's algorithm determines that there are two choices: swerve off the road, likely killing its owner, or stay on course, likely killing the children.
Alternatively, the self-driving car manufacturer could offer two models of the car, each of which follows a different philosophy. If consumers have to choose which model to buy, how many will choose the car that sacrifices them? Although many people might agree that the car should spare the children in a hypothetical situation, few would actually volunteer to sacrifice themselves in order to follow ethics (this brings us back to the point above, that humans often don’t follow ethics in real-life situations).
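A toy Python sketch makes the "two models" idea concrete; the Philosophy options and the decision rule are our invention, not a real vehicle system. The point is that the ethical stance is reduced to a parameter that someone, whether the manufacturer, the consumer, or (as discussed next) the government, must set.

```python
# A toy model of the "two car models" idea. The Philosophy options and
# the decision rule are invented for illustration, not a real vehicle API.
from enum import Enum

class Philosophy(Enum):
    ALTRUIST = "spare pedestrians, even at the passenger's expense"
    EGOIST = "protect the passenger at all costs"

class SelfDrivingCar:
    def __init__(self, philosophy: Philosophy):
        # The ethical stance is fixed by whoever configures this
        # parameter: manufacturer, consumer, or regulator.
        self.philosophy = philosophy

    def resolve_dilemma(self, pedestrians_ahead: int) -> str:
        if pedestrians_ahead == 0:
            return "continue"
        if self.philosophy is Philosophy.ALTRUIST:
            return "swerve off the road"   # endangers the passenger
        return "brake but stay on course"  # endangers the pedestrians

print(SelfDrivingCar(Philosophy.ALTRUIST).resolve_dilemma(pedestrians_ahead=2))
print(SelfDrivingCar(Philosophy.EGOIST).resolve_dilemma(pedestrians_ahead=2))
```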
Another possibility is that the government mandates how the cars are programmed. On one hand, this gives the government the power to pass laws that are guaranteed to be followed to a tee, since the computers won’t deviate from their programming. On the other hand, this practically amounts to totalitarian power, because lawmakers are determining the actions of computers that are entrusted with making decisions for people.
The potential dangers of AI are scary, but some of them are already a reality. Corporations, banks, and other institutions already use algorithms to make decisions, such as which loan applicants to approve or deny. On the positive side, an algorithm can't racially discriminate against an applicant (unless it's programmed to do so). On the negative side, the algorithm may discriminate against you based on your individual characteristics—something in your DNA, say, or your social media history. With algorithms in charge, you're more likely to face discrimination based on who you are rather than on which group you belong to.
This shift brings two consequences: you may never learn why the algorithm rejected you, since its reasoning can be opaque even to its designers; and, because the discrimination targets you as an individual rather than as a member of a group, you won't have fellow victims to band together with in protest.
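To see why this kind of discrimination is hard to contest, consider a hypothetical loan-scoring sketch in Python; the features, weights, and threshold are invented for illustration. The verdict emerges from a weighted blend of personal signals, so there is no single group-level rule to point to or appeal.

```python
# A hypothetical loan-scoring sketch. The features, weights, and threshold
# are invented; the point is that the verdict blends many personal signals.

def loan_score(applicant: dict) -> float:
    """Weighted sum over individual features (illustrative weights)."""
    weights = {
        "income": 0.4,
        "repayment_history": 0.4,
        "social_media_risk": -0.15,  # e.g., inferred from posts
        "genetic_risk": -0.05,       # e.g., inferred from a DNA profile
    }
    return sum(weights[k] * applicant[k] for k in weights)

applicant = {            # all features normalized to the 0..1 range
    "income": 0.7,
    "repayment_history": 0.9,
    "social_media_risk": 0.8,
    "genetic_risk": 0.6,
}

score = loan_score(applicant)
print(f"score={score:.2f}, approved={score > 0.5}")
# Rejected by a hundredth of a point: there's no group-level rule to
# appeal, only a blend of signals unique to this one applicant.
```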
The example of self-driving cars highlights one of the dangers of AI: The computer does whatever it’s programmed to do, no matter what. In some cases, that characteristic makes computers less dangerous than humans, because the computers won’t succumb to anger or retaliation and break the rules. However, the opposite side of the coin is that computers won’t be influenced by compassion or extenuating circumstances. In other words, robots are as benign or as dangerous as the people who program them—and, in the hands of corrupt, violent, or power-hungry people, robots could bring devastation to humans.
In the 21st century, AI could become widespread in countries run by dictators. Consider the possibilities if this technology were used in a country like North Korea, where citizens could be required to wear biometric bracelets that reveal their true feelings about the regime, making dissent nearly impossible.
In the last century, democratic countries were more prosperous than dictatorships because they distributed information processing across many people. Faced with a large volume of information—for example, when deciding whether to impose a new tariff—many people working in parallel could process it and reach a decision quickly, enabling the country to act promptly and, thus, prosper. Dictatorships, on the other hand, concentrated information and responsibility among a small group, which slowed the processing and decision-making.
By contrast, in the 21st century, AI could give dictatorships a competitive advantage. First, algorithms can process information much more rapidly than humans, which would close the gap that currently gives democracies an advantage over dictatorships. Second, the more information an algorithm processes, the more it learns and the more accurate it becomes—and dictators are likely to collect more information than democracies. For example, a democratic country keeps citizens’ medical records private, while an authoritarian government may collect not only medical records but also DNA scans. That kind of massive database of information would let a dictator know practically everything about her citizens, enabling her to wield immense control over them.
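A simple simulation illustrates the data advantage; this is our sketch, and the hidden "preference" value and noise level are arbitrary. An estimator's error shrinks as its sample size grows, which is why a regime that collects more data about its citizens ends up with a more accurate model of them.

```python
# A toy simulation of the data advantage. The hidden "preference" and
# noise level are arbitrary; what matters is how error falls as data grows.
import random

random.seed(0)
TRUE_PREFERENCE = 0.7  # hidden trait an algorithm is trying to learn

def estimate(n_samples: int) -> float:
    """Average n noisy observations of the hidden trait."""
    samples = [TRUE_PREFERENCE + random.gauss(0, 0.3) for _ in range(n_samples)]
    return sum(samples) / n_samples

for n in (10, 1_000, 100_000):
    error = abs(estimate(n) - TRUE_PREFERENCE)
    print(f"{n:>7} observations -> error {error:.4f}")
# More observations generally mean a smaller error, so whoever collects
# the most data builds the most accurate model of each citizen.
```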
Although AI could develop to the point that programmers could wire it with consciousness—which would essentially give computers a mind of their own, as science fiction thrillers forewarn—the possibility is remote. The larger danger is that humans put so much effort into developing AI that they neglect to develop their own consciousness and ability to discern. If people come to rely on computers for everything—and distrust their own instincts and capabilities in the process—then they become easy victims for manipulation. In fact, this threat has begun to come true in elections all over the world, as social media bots exploit voters’ fears and prejudices in order to influence their political actions.
In order to avoid falling victim to total mind control by AI, humans must devote more time and energy to researching and developing human consciousness. Furthermore, this commitment must be prioritized above immediate economic and political benefits. For example, many managers expect their employees to respond promptly to emails, even after hours. That expectation causes employees to compulsively check and answer emails, even at the expense of their experiences and sensations—during dinner, they may be so consumed by their email that they don't even notice the taste and texture of their food. If humans follow this road, they will become cogs in a machine run by robots, and they'll lose the ability to live up to their potential as individuals.
You already use algorithms every day. Does the convenience outweigh any concerns?
List at least three ways that you use algorithms in your daily life (such as GPS navigation, Google search suggestions, Netflix recommendations, and social media feeds).
Are you concerned that data about you could be collected based on these algorithms? Why or why not?
Would you consider stopping your use of these algorithms in order to keep your data private? Why or why not?
Given the convenience of delegating decisions to algorithms, would you consider allowing AI to make even bigger life decisions, like choosing your college, career path, or spouse? Why or why not?
Lesson: In addition to creating a useless class, technology gives wealthy elites access to health and economic advantages that exacerbate inequality, creating a vicious cycle.
As technology threatens to create a useless class of unskilled workers and algorithms have the potential to overpower free will, inequality could grow exponentially: On one end of the spectrum will be the people who become unemployed when computers automate their jobs, and on the other end will be the wealthy CEOs who own the tech companies that automated those jobs.
Inequality has always existed to varying degrees in human societies: even ancient tribes had hierarchies of status, and the gaps widened as agricultural and industrial societies divided people into rulers and peasants, owners and workers.
The rise of technology and globalization brought speculation that equality would increase among people all over the world, but the opposite is happening: A small class of the tech elite possesses a huge portion of the world’s wealth, while billions struggle in poverty. If (or when) AI creates a useless class, the gap will widen further. Additionally, when the masses are no longer critical or even relevant to the economy, the wealthy elite may be less inclined to provide healthcare, education, and other services.
Making matters worse, developments in biotech could enable wealthy elites to become biologically superior by improving their physical and cognitive abilities and extending their lives. Throughout history, socially and economically elite classes owed their status to good fortune, cultural privileges, and hard work. But if wealthy elites gain biological advantages over the poor—and the poor are pushed out of opportunities to work and gain wealth—it could create a vicious cycle that continually widens the gap between haves and have-nots. Taken to the extreme, bioengineering could eventually turn the rich into a separate species with no need for the underclass of commoners.
In order to prevent the rise of a biologically superior species of wealthy tech elites, governments need to regulate who owns data, which is the most valuable asset of the 21st century. In the past, power belonged to those who owned the most land. Then, machines, factories, and corporations became the most valuable forms of capital. In the digital age, data is king.
Tech companies already profit from collecting and selling consumers’ data. The more that corporations learn about the behaviors and preferences of masses of internet users, the more they’ll be able to control those consumers. Soon enough, advertisements could become obsolete—already, when Netflix recommends a movie based on your specific behaviors, you have less need for the movie trailer. As Netflix, Google, Facebook, and other tech companies learn more about you and deliver recommendations tailored specifically to your preferences, why would you need a commercial?
Convenient as it may be to let Google assist your purchasing choices, the consequences of massive data collection could quickly snowball. Currently, people connect to a data-gathering network whenever they use the internet on their computers and smartphones. In time, that connection could become constant, with devices collecting biometric information 24 hours a day, starting in infancy. The data could be used to determine your insurance policies, your healthcare, or your employment—and if you chose to disconnect, you could risk losing all three.
The critical question will be: Who owns your data?
Lesson: People’s social lives exist largely online, but people need communities and face-to-face interaction to thrive. In addition to maintaining close-knit communities, people must recognize and participate in the global civilization that exists today.
How will humans tackle the massive challenges they face in the 21st century? One option is to band together and tackle them as communities.
Besides food, water, and shelter, belonging to a community is essential for humans to survive and thrive. Throughout most of human history, people lived in small tribes, typically consisting of a few hundred people. In a community of this size, you can have some form of relationship with everyone, which optimizes group dynamics. However, in recent centuries, small tribes have been replaced with large nations—and some people believe that the loss of community has been a major factor in creating the challenges that modern societies face, from corrupt governments to drug addiction crises.
Facebook CEO Mark Zuckerberg believes so strongly in the restorative power of communities that he has made a mission of connecting Facebook users via virtual groups. Zuckerberg’s project uses AI to suggest groups that might be meaningful to individual Facebook users. The goal is to use the social media platform and the algorithmic tools to rebuild communities where people now gather—online—in order to improve connections among people throughout the world.
Zuckerberg is not the first to try to build communities in order to improve society. Activist groups, religious groups, and others build communities around their shared goals and values. However, the Facebook project is unique in a few ways, including its unprecedented global scale, with billions of users across nearly every country, and its use of AI tools to match individual users with communities.
Zuckerberg’s goal to connect people will only work if he can bridge the divide between the online world and the offline one. Facebook users can join meaningful communities online and connect with members of those communities through posts and messages—but will their community still exist if the website crashes? In order to truly bring humanity closer together, the communities that begin online need to make the jump to the real world.
Creating a true connection with someone requires you to interact with her as a whole person, which generally calls for face-to-face interaction. When you get to know someone only through her posts and curated photos, you have a limited understanding of her. By contrast, if you meet someone for a cup of coffee, your conversation could wander to topics that aren’t on her Facebook page, like your shared love of baseball. After that conversation, you’re more likely to be open-minded when she expresses a political view you oppose than you would be if you didn’t have a well-rounded understanding of her. In other words, without a physical, real-world relationship, you’re more likely to be polarized and repelled by opposing opinions.
If Facebook truly wants to rebuild human communities—in the real world, not just online—then it may have to adopt strategies that actually encourage users to spend less time online and more time in the real world. Such a policy would inevitably hurt the company's finances, and shareholders would not be happy with the drop in profits, no matter how noble the cause. This is why corporations have historically not led social revolutions: company leaders, employees, and shareholders won't tolerate the sacrifices that revolution requires.
Aside from keeping people physically distanced from each other, technology also creates a disconnect between you and your own body, sensations, and emotions. When you use a device, only your eyes, ears, and fingers are engaged. If you’re looking at your email while eating lunch, you’re probably not paying attention to the taste, smell, or texture of your food. Furthermore, online interaction has trained people to externalize their experiences: If you are on a scenic hike, your own enjoyment is easily diluted by the impulse to snap a photo, post it on social media, and wait for likes and comments to affirm that you’re having a lovely experience.
In order for Facebook and other tech giants to succeed in bringing people together, they, too, must see internet users as more than just ears, eyes, and fingers. In order to connect people to one another, Zuckerberg has to first appreciate who people are as whole beings. However, if tech companies understand that they’re not connecting with their users’ full selves, then they’ll see the limitations of their reach, and they might be tempted to use algorithms to extend that reach in potentially intrusive ways. Google Glass and Pokemon Go were two attempts to blur the lines between online and offline realities. On a more extreme level, tech giants could use biometric sensors and interfaces that directly connect users' brains to the computer in order to truly bridge people’s physical realities with their virtual worlds—and that could lead to major manipulation.
The goal of bringing humanity together raises the question: What divided humanity in the first place? Throughout history, people have distinguished different cultures and religions as distinct civilizations that clash and compete, as if natural selection were weeding out various sects of humanity. For example, politicians and pundits today talk about the clash between Islamic civilization and Western civilization, claiming that only one or the other can ultimately survive.
In reality, all humans are part of a global civilization, and individual cultural identities and social structures are merely different branches of that civilization. Although a Muslim and a Westerner may behave differently, the differences are cultural, not genetic. People tend to reinforce differences among different cultures by overemphasizing the defining characteristics of each culture, pointing to common themes throughout that culture’s history. However, the most consistent and enduring characteristic of any culture is its ever-changing nature. For example, modern ultra-Orthodox Jews go to great lengths to separate men and women in synagogues in the name of traditions around modesty—and this becomes a point of distinction between them and other cultures—but excavations of ancient synagogues in Israel reveal murals of scantily clad women, suggesting that modesty is not an ancient value in that culture.
Unlike different species—which can never merge—civilizations can come together as one. Throughout human history, distinct tribes have merged into larger nations and civilizations. In recent generations, as globalization has connected the world economically, socially, and technologically, all of humankind has merged into one global civilization.
There are two aspects of this process of global merging:
1) Creating connections that link groups. This can happen between groups that appear to have little in common. For example, during the 20th century, wars played a large role in linking countries around the world. Although war limits or halts trade between battling nations, the deployment of troops quickly spreads cultural practices, ideas, and technologies. War also creates connections by raising the public's curiosity about the enemy—for example, far more American movies have been made about Russia, Vietnam, and the Middle East than about Canada.
2) Establishing uniform behaviors among groups. On the surface, it may not seem like there is uniformity in our global civilization. But, at its core, virtually every modern culture follows the same political paradigm, which values human rights, political representation, and international law. On a more visible level, nearly every country has a rectangular flag comprising simple shapes and colors, as well as an anthem that lasts a few minutes and hits on themes of patriotism and independence. These shared values make it possible for people across the globe to find common ground, and that has been critical to the merging of the modern global civilization.
By contrast, a thousand years ago, people in different regions followed different political models—some were loyal to kings, others followed priests, some were tribal, and others believed themselves to be the only legitimate civilization. Additionally, power and political structures changed constantly, as some societies disintegrated and others conquered and absorbed new territories. As a result, it was nearly impossible for different groups to agree on procedures and laws to govern their interactions with one another.
The commonalities we see among nations in the modern world are not limited to political paradigms. They also include shared scientific and medical practices—hospitals around the world are organized similarly and treat patients according to the same science—and shared economic foundations, with nearly every nation relying on money, banks, and markets.
Of course, there are differences among groups—from religious beliefs to national identities—but all of the basic and practical matters are largely agreed upon. Furthermore, no civilization is without internal disputes; even within a family or a friend group, disagreements are inevitable. In fact, common conflicts and dilemmas are a defining feature of membership within a group. For example, in 1618, members of European civilization were locked in fierce conflict over their vastly different religious views. At the time, that struggle defined European identity, because those outside that civilization would have had little understanding of, or interest in, such a dispute.
Today, in our global civilization, the biggest challenges of the 21st century will confront people in all corners of the world: nuclear war, climate change, and technological disruption.
Reflect on the communities to which you belong, where they exist, and how they impact you.
List the online communities to which you belong. What is your role in each, and what makes each one meaningful to your life?
Do any of these communities also have an offline presence? How so?
Do you feel the same level of connection with fellow group members in online-only communities as you do in your communities that exist offline? Why or why not?
Do you think that Facebook (or other tech companies) could suggest communities to you that would enrich your life? Why or why not?
Lesson: Many people around the world are finding comfort in the identity and community of their nations and faith—but nationalism and religion create an “us” and “them” mentality, which makes it difficult to come together as a global civilization to address collective problems.
Despite the undeniable existence of a global civilization, many countries are increasingly leaning into nationalism. People have returned to nationalism in recent years in response to modern challenges, but nationalism has deep roots in human society.
For millions of years, humans lived in smaller communities and tribes, but, over time, they merged to take on challenges that were too big for one group to handle. For example, ancient tribes near the Nile River relied on the water to grow their crops, but they constantly had to deal with years of drought and years of flooding. Each tribe had limited manpower and claimed a small section of the river, so, eventually, many tribes banded together to build dams and canals that benefited everyone.
Over time, nation-states formed, and people used culture as a tool to achieve cooperation among masses of people. The nationalism that resulted comes in two distinct strains:
Mild nationalism leads you to see your country as unique and valuable, and it motivates you to contribute to the well-being of all of your fellow citizens. Extreme nationalism, on the other hand, leads you to think your country is superior to all other nations, and it easily snowballs into war and violence toward foreigners.
To a point, people tolerate the negatives of nationalism, such as war, because of the benefits of nationalism, such as an education system. However, in the 1960s, the threat of nuclear annihilation caused Americans to step back from the nationalism that drove the country into wars; by the end of the Cold War, many people leaned heavily toward globalization. But, in recent years, feelings of disconnection from global economic forces and fears that globalization would disintegrate national systems of education and healthcare have revived a sense of nationalism.
Proponents of nationalism see immigration, multiculturalism, and globalization as threats to national traditions and identities. They’re in favor of closing borders and slowing the exchange of people, products, money, and knowledge. Instead, nationalists envision a world in which independent nations peacefully trade and coexist without sharing common values, cultures, or laws.
The problem with the nationalist view is that it doesn't offer any realistic strategy for keeping peace in the world. Each nation will naturally have its own interests (such as expanding its borders to annex desirable land), and some of those interests will inevitably conflict with other nations' goals. Without shared values, political principles, and international laws, there are no peaceful means of resolving such disputes, which almost certainly leads to war and genocide.
A nationalist in a powerful nation—such as Russia or the U.S.—may assume that her country is strong enough to stand on its own, regardless of whether the rest of the world falls into chaos. However, this view underestimates the need for international trade and the global laws governing that trade. Without international trade, even powerful economies would suffer: Russia's economy, for example, depends heavily on exporting oil and gas, and no modern nation can produce everything it consumes on its own.
While many people are focusing on the issues and interests of their own nations, the biggest challenges of the 21st century are global, and they require an international response. They are:
1) The nuclear challenge: Once nuclear power became a tool of war, the threat of war meant the possibility of massive destruction. This threat was front-of-mind during the Cold War, so America, Europe, China, and the Soviet Union significantly changed geopolitical dynamics in order to avoid mass killing and nuclear annihilation. As a result, war overall has declined—but with Brexit, the United Kingdom threatens this balance by abandoning the European Union and isolating itself in nationalism. The loss of multinational cooperation and the rise in nationalism could lead to war and nuclear devastation.
2) The ecological challenge: In recent decades, humans have accelerated the pace of environmental degradation. As a result, climate change threatens to make many plants and animals extinct, destroy ecological systems, cause more severe weather, hurt agricultural production, and make large areas of the globe uninhabitable. If humans don’t significantly change their behavior soon, environmental decline will reach a tipping point from which there’s no return.
While individual nations can overhaul environmental practices, raise taxes on emissions, and develop eco-friendly technologies, the effort won’t be enough unless the whole world participates. Further complicating matters, some countries have more incentive to reform than others: Some nations’ economies rely on exporting fossil fuels, while others would gladly stop importing those fuels if a more affordable alternative existed. Additionally, climate change threatens to flood some nations with rising sea levels, while geological changes could actually benefit other nations. And since the effects of climate change are long-term (though they loom ever closer), it’s difficult for some countries to prioritize green policies over the short-term economic pain of reform.
3) The technological challenge: As we discussed in Part I, the simultaneous rise of infotech and biotech threatens to transform the economy, the labor market, social and political power, and even the biological makeup of humans. A nationalist viewpoint is simply too narrow for such a wide-reaching issue. As with climate change, if the United States imposes policies restricting certain technological development, those laws don’t prevent China from pursuing the research—and when the U.S. and other countries see China developing new technologies, they’ll likely do the same to avoid being left behind in a competitive, nationalistic world. Additionally, whereas people can universally agree that global warming and nuclear destruction are bad, opinions on the ethics of AI and bioengineering vary widely, which would lead to different guidelines in each country. To avoid a dangerous snowball of technological development, nations need to reach an international agreement on ethical guidelines for technological innovation and adoption.
Each of these three challenges individually threatens devastation—and, taken together, each one exacerbates the others, creating a vicious cycle. As climate change increases the frequency of severe weather, leads to food shortages, and puts people out of their homes, technological development is likely to accelerate in a desperate search for solutions. As technological development progresses, increasing tension and competition among nations could raise the likelihood of nuclear war. Growing tensions or an all-out war will make it all but impossible to work together to combat climate change or create guidelines for limiting the development of AI.
Since these three massive challenges facing humans in the 21st century are all global in nature, it makes no sense to address them with nationalist politics. Just as tribes merged into nations in order to tackle issues that were too big for a single tribe, nations now need to merge to tackle these problems that are too big for any individual country to solve. This doesn’t mean that you can’t be patriotic, maintain cultural traditions, and uphold your national identity. However, in doing so, you must also consider the best interest of the global community—because the global nature of these problems means that whatever is in the best interest of the world is also in the best interest of each nation.
You probably already maintain multiple simultaneous loyalties: You’re loyal to your family, your school, your company, your neighborhood, your city, and your country. Add to that a loyalty to your global civilization. Occasional conflicts among your loyalties are inevitable—for example, your city may propose a change that benefits the city as a whole but creates more traffic in your neighborhood. These conflicts are not insurmountable: If municipal, state, and national politics address issues with consideration for the global impact, people can take care of their compatriots by working in the best interest of the entire globe.
If political models, governments, and scientists have failed to provide answers for how to navigate the immense challenges of the 21st century, could religion hold the answers? In order to explore this, we’ll look at three areas where religion falls short:
1) Technical problems, such as how African farmers should deal with droughts caused by climate change. Religion can’t solve technical problems—science can. Our ancestors constantly turned to religion to solve technical problems: Priests prayed for rain to nourish farmers’ lands, and shamans healed the ill. However, as scientific knowledge has grown, science has gradually replaced religion as the solution to technical problems: Farmers now consult meteorologists and plant drought-resistant crops, and sick people seek treatment from doctors. Religion remains merely a supplemental presence in these matters, adding prayers to scientific solutions.
Religious leaders aren’t experts in agriculture, medicine, or other technical fields—they are experts in interpretation, which makes them adept at offering explanations for droughts and illness rather than solutions. By contrast, scientists home in on a single subject and use trial and error to find solutions that work. This is why science has not only replaced religion as the authority on technical problems but also been adopted across different religions and cultures throughout the global civilization.
2) Policy problems, such as how governments should prevent climate change to begin with. As is the case with technical problems, religion doesn’t offer expertise in solving policy problems. From the Bible to the Quran, the wisdom in ancient texts doesn’t apply directly to the context of modern times, at least as far as policy goes. Instead, religious and political leaders typically look for answers from modern sources, and then find a passage from a religious text that can be interpreted to justify the decision. In other words, religion is used to justify solutions, but it does not provide solutions to policy problems. This becomes clear when followers of the same religious text reach two different conclusions on the same issue—for example, American evangelicals cite the Bible in their opposition to environmental regulations, while Pope Francis declares that fighting climate change is a religious duty.
3) Identity problems, such as whether Americans should even worry about the plight of African farmers. Whereas religion is irrelevant to technical and policy problems, it contributes greatly to identity problems—but it serves to divide rather than to unite. Despite overwhelming similarities among different faiths, religions use ceremonies, rites, and rituals to reinforce followers’ connection to a particular religion, which inherently sets them apart from other religions. For example, regular worship is a cornerstone of many faiths, but its distinct forms make the practices look more different than alike: Muslims kneel in prayer five times a day, Jews gather for a meal and prayers on Friday nights, and Christians go to church on Sunday mornings.
By creating ways for followers to distinguish themselves from others, religions create cultures and mass identities with which followers can align. Mass identities make mass cooperation possible, and mass cooperation is necessary to harness human power to tackle large issues. The same principles easily translate to nationalism. However, whether through nationalism or religion, narrowly defined identities and loyalties work against the global cooperation needed to take on the nuclear, ecological, and technological problems of the 21st century.
Lesson: Nationalist divisions strain debates about immigration, and those debates in turn deepen the divisions—but nations’ ability to resolve disagreements about immigration will indicate how effectively they can address the global issues of the 21st century.
Immigration will increasingly be a flashpoint because of growing tensions among people of different nationalities as the global economy, increased international travel, and technology bring people together from across the world. If governments don’t find ways of addressing the fierce debates about immigration, people will be too divided to tackle the global challenges of the 21st century.
Immigration requires an implicit deal between migrants and host countries—but immigration opponents say that immigrants aren’t holding up their end of the deal, while immigration advocates say that host countries are falling short. We’ll explore this debate through each of the deal’s three terms:
The first aspect of immigration is the physical entry of migrants into the host country. This basic first step sparks heated debate because it taps into distinctly different beliefs.
Pro-immigrationists argue that:
Anti-immigrationists argue that:
Of course, sometimes countries will say one thing and do another. For example, a country may turn a blind eye to undocumented workers because the economy benefits from their cheap labor, while also refusing to give them legal status. This dynamic ultimately creates an entire class of underpaid, undocumented immigrants who have no political power.
Once immigrants have entered a host country, the nation and its citizens expect the immigrants to assimilate to local norms and values—however, people have a range of opinions about the extent to which immigrants should assimilate. Must they change the way they dress, the way they eat, the way they socialize? If their home country is religious, must they take on a secular view? If their home country is patriarchal, must they take on a feminist view?
There are two main issues in the debate about assimilation:
If immigrants assimilate to their host country, the expectation is that they will be accepted as full, equal members of that country’s society. The disagreement between pro-immigration and anti-immigration arguments lies in the timeline for this acceptance. Immigration advocates view this process on a personal timescale from the immigrants’ perspective: A second-generation immigrant identifies as a citizen of the host country, and she may know little of her grandparents’ home country. Why, then, shouldn’t the host country accept her? Immigration proponents argue that assimilated immigrants should be absorbed into society and treated like first-class citizens within a few decades of their arrival—and if they aren’t treated fairly and they protest for better treatment, it’s the host country’s fault for failing to embrace them.
On the other hand, immigration opponents view this process on a collective timeline from the nation’s perspective, and they argue that host countries need more time to absorb new members into society. Anti-immigrationists say that it takes generations for foreigners to become part of the fabric of the country and to be fully integrated as equal citizens. Historically, civilizations that successfully absorbed foreigners took centuries to do so.
Although nearly everyone agrees on the three terms of the immigration deal, conflicts arise in defining those terms:
Immigration opponents think that immigrants are failing to assimilate, which frees the host country from its obligation to treat them as equal citizens and justifies the host country’s reluctance to accept more immigrants. By contrast, immigration advocates say that immigrants are making the effort to assimilate but that host countries are neglecting their obligation to absorb them. As long as the two sides use different definitions, there’s no way to reach common ground.
Immigration puts the culture of a migrant’s home country side-by-side with the culture of the host country, highlighting the differences between the two. This comparison often leads to biased claims that one of the cultures is superior. Typically, the host country’s culture is considered superior, because, in a sense, the house always wins.
For example, imagine someone migrates from the fictional country of Coldia to the fictional country of Warmland. Culturally, Coldians tend to repress emotional outbursts, avoid conflict, and let issues simmer quietly, while Warmlanders value confrontation and expressions of emotion in order to resolve conflicts and move forward. When the Coldian immigrant applies for a job in Warmland, the hiring manager sees the Coldian as emotionally distant, unfriendly, and cold. The job involves a lot of interaction with employees and clients, and the hiring manager doesn’t think the Coldian would be effective, so she offers the job to a native Warmlander instead of the immigrant.
Similar situations play out in all kinds of contexts. On one hand, the hiring manager may objectively be looking for the best fit for the job. On the other hand, a cycle develops in which immigrants are kept in lower positions because they follow different cultural norms, and that limits their ability to prosper in their new country. These are the effects of culturism.
Although it’s often called racism, culturism is more common today than racism, which rested on old, biology-based notions of racial superiority. Science has debunked those notions, and discrimination is now based on culture rather than biology—hence, culturism. This shift brings two main changes:
It’s difficult to draw the line between acknowledging cultural differences and discriminating based on them. Still, culturism clearly crosses into bigotry in three ways:
Like culturism, immigration is difficult to resolve because it is nuanced—both sides have legitimate arguments, but the friction lies in deciding where to draw the line. Difficult as it may be, each nation’s ability to reach an agreement on immigration will be a major indicator of its potential to come together with the rest of the global civilization to address the looming challenges of the 21st century.
Reflect on your definitions of the terms of immigration, and how well your beliefs align with your country’s policies.
Do you believe countries have an obligation to accept immigrants? Why or why not?
Should an immigrant adopt a host country’s cultural norms even when they conflict with her own? Why or why not?
How long should an immigrant expect to wait before being absorbed into her host country’s society? Why?
How well do your beliefs about immigration line up with your country’s immigration policies and practices?
How do you think your country’s immigration policies could be improved? Why?
Lesson: Don’t waste too much energy worrying about terrorism, because it is a relatively minor threat unless terrorists get nuclear weapons. Similarly, military warfare is an increasingly remote possibility in the modern world.
In recent decades, fear of terrorism has gripped the world, ignited wars, and shaped politics—and that’s by design. Outliers like 9/11 aside, most acts of terrorism kill very few people; far more people die in traffic accidents or from diabetes. As the name suggests, terrorism is meant primarily to incite terror; it generally causes little physical damage.
In an attack, most military strategists aim to destroy the enemy’s most powerful weapons and essential resources in order to handicap any retaliation. However, terrorists don’t have the power to inflict such damage, so their attacks often do little harm to their enemies’ weapons, equipment, and infrastructure. Lacking resources and manpower, terrorists’ only hope is to aggravate the enemy so much that it overreacts, and that the overreaction creates enough chaos and instability to tip the power balance in the terrorists’ favor. In other words, terrorists provoke their enemies into causing the damage that the terrorists lack the strength to inflict themselves.
A terrorist is like a fly that wants to destroy a china shop. The fly isn’t strong enough to tip anything over, so when a bull enters the china shop, the fly buzzes in its ear until the bull starts thrashing and causing destruction. The 9/11 terrorists were flies that buzzed in America’s ear and caused mass fear and confusion. In an effort to calm the public’s fears, America responded with a show of power and strength by declaring a War on Terror, in which it thrashed through the Middle East. America’s efforts ultimately destabilized the Middle East and created space for the terrorists to seize more power.
Terrorism is only effective because citizens of the United States and other centralized countries are unaccustomed to political violence. Before modern times, political violence was a fact of life in most of the world: Individuals and groups gained political power only through violent force. As the centuries passed, many governments were able to reduce and nearly eradicate political violence, to the point that their citizens came to expect protection from such violence in daily life. This shift had two effects that made terrorism a viable strategy:
Terrorism causes public fear, which pushes the government to prove its power and defend its authority, and that generally amounts to an overreaction—which plays right into the terrorists’ hands. In fact, the safer a country is and the less political violence it experiences, the more vulnerable it is to falling victim to terrorism. Instead of overreacting, governments should understand that terrorists don’t have enough power to do much harm, as long as government officials keep their cool and make prudent decisions.
In order to effectively fight terrorism, nations must respond on three fronts:
Although terrorists are only a minor threat now, they could become exponentially more dangerous if they obtain nuclear weapons or launch cyberterrorism or bioterrorism attacks. If any of those scenarios comes to pass, terrorists will not only create a spectacle but also cause serious damage, and governments will need to take stronger action to meet the level of danger.
In the meantime, the government and the public must be careful to distinguish the real, current threat of terrorists from their potential future threat. America’s War on Terror serves as a cautionary tale of what can happen when a powerful country overreacts: Not only did years of war wreak mass destruction in the Middle East, they also wasted trillions of dollars that could have gone to more constructive efforts, such as fighting climate change and researching treatments and cures for diseases.
Despite the rise in terrorism, the last few decades have been the most peaceful in the history of humankind. Throughout much of history, war was a necessary means of nations’ growth. National powers violently conquered other territories in pursuit of land, capital, manpower, control of trade, and geopolitical status. However, military wars in which soldiers battle on the ground (or at sea or in the air) in order to raise their nation’s economic prosperity and global power are largely extinct.
Whereas the most valuable economic assets used to be physical—such as land, gold, and goods—modern wealth is information and technology, which are impossible to capture through war. Additionally, many nations are reluctant to instigate war for various reasons, such as:
Despite recent peaceful times and the declining returns of war, international tensions have been building since the 2008 financial crisis. Still, war is not inevitable. The world might be moving past warfare: The Cold War proved that even a superpower standoff can be resolved without the two sides ever fighting each other directly. Today, most successful countries have improved their geopolitical status by strengthening their economies rather than their militaries.
On the other hand, it’s foolish to underestimate the potential for war. Despite the declining benefits of war, human emotion can always factor into leaders’ decisions. Given how complicated the world is, even seemingly rational decisions can lead to stupid actions. Additionally, anticipating war makes war more likely, because it causes countries to increase their troops, bulk up their arms supplies, and act less cooperatively and more suspiciously in international relations—all of this not only prepares for war, but is also likely to instigate it.
The only successful conquest by a major global power in the 21st century was Russia’s 2014 invasion of Crimea. Part of the reason for Russia’s success was that Ukraine did little to resist, and no other country directly intervened. Russia wisely waged a limited war: It targeted a fairly weak nation and limited its attack to avoid embroiling any other countries. In fact, after its success in Crimea, Russia tried to attack other regions of Ukraine, but it failed in the face of tougher resistance.
Although Russia’s war raised its geopolitical position, it also raised international animosity and distrust of Russia. Additionally, the venture was an economic loss for the country overall. By contrast, during the same period, China greatly increased its economic prosperity without any international conflicts.
For most nations in the world, the prospect of war is unappealing and probably promises more losses than rewards. However, if world leaders follow Russia’s lead—or find another means of waging successful wars in a modern context—the damage could be unprecedented.
Lesson: People have inflated perceptions of their culture’s importance and contributions to the world.
Even when war brings high costs and promises little reward, there’s a host of reasons why leaders wage it anyway. One major reason is that national leaders—as well as many people in general—overestimate the importance of their own culture and its impact on the world. Greeks, Chinese, and Hindus are just a few of the cultures that claim history began with their ancestors’ achievements. However, this skewed view shows a lack of humility and a disregard for history. In reality, morality, creativity, art, and spirituality can’t be credited to any single culture, because they’re wired into human DNA.
Although nearly every culture perpetuates similarly self-important myths, we’ll break down the flaws in this view by examining the Jewish culture and claims of Jewish achievements.
Children are raised with a misunderstanding of their culture’s importance, as school history lessons emphasize certain events, downplay others, and frame history based on how it affected their ancestors. For instance, when Israeli students learn about the French Revolution, the lesson focuses on Jews’ political and legal status in the French Republic. When this egocentric perspective shapes an entire community’s understanding of human history and the modern world, it’s no wonder that they inflate their ancestors’ contributions to humanity.
In reality, Judaism’s biggest contribution to humanity was indirect: Judaism gave rise to Christianity and Islam, which are two of the most influential religions in human history. Christianity and Islam were at the root of some of the greatest achievements and atrocities in history. However, Judaism deserves only as much credit and blame for Christianity’s and Islam’s contributions as Freud’s mother deserves for his achievements.
Let’s examine some specific claims of Jewish achievement:
Morality: Jews may claim credit for morality, but tens of thousands of years before the advent of Judaism, Stone Age tribes developed their own moral codes. In fact, all social animals—from dolphins to monkeys—have evolved to follow ethical codes that promote group cooperation. Researchers have studied chimpanzee groups in which the alpha male has protected disabled members of the group or adopted orphaned young. The chimps didn’t need the Bible or the Torah to tell them to look after the poor and needy.
Monotheism: There’s evidence that Judaism was not the first or only ancient faith to worship just one god. Furthermore, whichever religion came up with monotheism should be blamed—not credited—because monotheism has been the root of many religious wars and persecutions. If you believe that there is just one god that everyone should worship, you’re more inclined to be intolerant of other people’s gods and rituals. By contrast, if you believe that there are multiple gods, it’s easier to accept that other people celebrate different gods and worship them differently than you worship.
Science: During the 19th and 20th centuries, Jews made significant contributions to science. However, Jewish scientists’ achievements came only after the Jewish Enlightenment and secularization caused many Jews to expand their view beyond strictly religious perspectives. Furthermore, these accomplishments should be credited to the individual scientists rather than to their religion. In other words, the scientific achievements were not those of Jews specifically, but rather of scientists who happened to be Jewish.
You may be spreading your culture’s self-important myths without realizing it.
Reflect on your history education in school as well as current media and pop culture. How is your culture generally characterized?
Do you think that representation is accurate?
Give one or two examples of ways that media and public discourse misrepresent your culture (for example, by glossing over historical events or distorting facts).
What do you think is a more accurate way to characterize your culture?
Lesson: Many religious laws are meant to keep social order, but people are driven to cooperate regardless of religious convictions.
Just as people wrongly credit their cultures with contributing to society and maintaining social order, they mistakenly attribute morality to religion. In reality, humans are hard-wired to maintain social order, and religion has worked both for and against this cause.
When people talk about God, they can be referring to one of two versions: the cosmic God—a mysterious, unknowable force behind the universe—or the lawgiver God, who hands down specific rules for human conduct.
Religious people talk about both gods as one—declaring that He is a mysterious force, but also that He has very clear rules about gay marriage. However, these two views of God are contradictory: If God is an enigma, how could He also have delivered so many specific and minute ordinances about how humans should conduct themselves?
Holy books such as the Bible and Quran try to draw the connection between the cosmic God and the lawgiver God, but these texts smack of human interpretation. It’s not logical that God Himself—whichever God that may be—wrote these texts not only to proclaim the principles of a moral life, but also to make decrees that were specific to the time and place when the texts were written. It makes more sense that humans wrote these texts in order to maintain social order and legitimize cultural norms.
While the rules of the lawgiver God may have successfully kept peace and social order in many eras and cultures, they have also been the source of much violence and discrimination. People have committed countless atrocities in the name of God. By contrast, secular laws have achieved the same social order as religious laws, but they have not inspired the same level of self-righteous violence.
Despite what some may say, humans don’t need divine law or the threat of hell in order to act morally. Morality is baked into the DNA of humans and all social animals, as we mentioned in the last chapter. As social animals, humans are motivated to do what’s best for their communities, because relationships play a large role in determining human happiness. Additionally, humans are motivated to be good to people outside of their immediate communities for reasons that are separate from religion, including:
Since belief in God is not necessary for morality, let’s examine the merits of secularism. Religion demands faith and commitment to one God and one set of beliefs, which necessarily closes followers off to other ideas. By contrast, secular people consider wisdom and morality to be common human traits, regardless of faith or background, which makes secular people more accepting of different beliefs and identities. Instead of committing to a set of beliefs, secular people adhere to a core set of ethics, which includes:
Secular people recognize that these values are innate in all humans, so they can appreciate when religions uphold the same values. As such, secular societies embrace religious people as long as the faithful put secular values above religious codes. For example, although Judaism proclaims that Jews are the chosen people—and, thus, superior to gentiles—Jews are expected to respect everyone’s equality in a secular society. However, secular society does not oblige religious people to forgo their beliefs or ceremonies. Secular people’s appreciation for freedom extends to everyone’s right to worship according to her individual beliefs.
With its commitment to the pursuit of truth, compassion, equality, and freedom, secularism sets ideals that are difficult—if not impossible—for people to achieve, and the challenge is even greater for large societies. As a result, many secular movements have morphed into dogmas.
One example is Marxism, which began with Karl Marx’s conviction that people should abandon religion and seek their own truths. Over time, the difficulty of truth-seeking and the challenges of war and revolution eroded Marx’s secular vision. When Stalin held power, he transformed Marxism: Stalin declared that it was too difficult for the general public to uncover the truth about the world, and that they should simply trust whatever the Soviet Communist Party said.
While some mutated dogmas are harmful, others have positive effects. An example is the doctrine of human rights, which began as a secular view of freedom and equality and eventually became a dogma of humans’ natural right to life and liberty. In reality, people aren’t born with these inalienable rights—but belief in them has limited the harm of authoritarians and protected billions of people from the brutal effects of violence and poverty.
This transformation isn’t limited to secularism—every religion and ideology has some form of distortion, or shadow. For example, Christianity’s teachings of faith and love have been twisted many times to produce holy wars and the Inquisition. Although the shadow doesn’t represent an ideology’s true values, followers must still examine how these mutations happen. Secularism’s commitment to truth and admitting mistakes should make its followers particularly willing to take this hard look.
Reflect on how you distinguish right from wrong.
Is your moral compass guided by religious values, laws, or some other force?
Think of someone you know whose moral compass is guided by a different force. How does her moral behavior differ from yours?
If you didn’t have that influence in your life (for example, you weren’t a member of that religion or you lived under different laws), do you think your behavior would be significantly different? Why or why not?
Lesson: You’ll never be able to understand everything about how the world works—and that’s OK, as long as you recognize your ignorance and don’t overestimate your knowledge.
In order to find truth, you must recognize what you know—and what you don’t know. Modern society has an incredible amount of information at its fingertips, yet, individually, people know far less than their ancestors. In the Stone Age, hunter-gatherers knew how to hunt, make fire, and escape predators. Today, individuals don’t need the same breadth of knowledge because they have access to a global network of collective knowledge and others’ expertise.
Our ability to access collective knowledge has been critical to humankind’s incredible progress and achievements—but it’s also led to two dangerous phenomena: groupthink, in which people adopt the views of their group instead of reasoning through issues themselves, and the knowledge illusion, in which people mistake the group’s collective knowledge for their own understanding.
Corporate and political leaders are even more susceptible to groupthink, because they’re so busy ruling that they don’t have the time to reexamine issues and reach independent conclusions. In order to truly evaluate a belief and to come up with new insights, you need the opportunity to waste time. You must have time to come up with and sift through many new ideas—including many dead-ends—in order to hit on something insightful.
Besides being short on time, leaders also have the burden of power, which warps their perception of truth. First, when you have power, your perspective naturally skews to find ways to use your power—and your justification for using power may not always reflect the truth. Even if you are judicious about wielding your power, those around you will try to sway you to use it for their benefit. Second, leaders typically reach a position of power because they strongly represent the views of the group. The powerful people who surround leaders are invested in maintaining the order based on those existing views, not questioning them and jeopardizing social structures.
The dangers of groupthink and the knowledge illusion will become more severe as the 21st century progresses. Technology, the economy, and global politics will become increasingly complex, individuals’ understanding will continue to shrink, and—as we discussed—the stakes will continue to rise. While it’s unrealistic for individuals to try to close their knowledge gaps, the best they can do is to acknowledge their ignorance and act with humility.
Like morality and ethics, humans’ sense of justice developed during the times of ancient hunter-gatherer societies. Human codes of right and wrong were developed to suit small communities of a few hundred people in a small geographic area—and they don’t translate seamlessly to our modern world of millions of people across the globe.
Justice requires an understanding of cause and effect, which was much simpler in ancient times. In the Stone Age, if you stole your neighbor’s food, the effect of the theft was immediate and apparent: The neighbor and her family would go to sleep hungry that night. By contrast, most of the injustices in the modern world are embedded in systems and structures—such as systems of politics and trade—which makes it almost impossible to know all of the consequences of your seemingly small actions. For example, you may think you’re innocently shopping for groceries and clothes, while others blame you for participating in an inhumane system of food production and perpetuating child labor in sweatshops halfway across the world. It’s almost impossible to know whether or not you’re doing wrong, because the global system in which we live does wrong on your behalf and without your awareness.
Some people argue that individuals’ intentions should be the measure of their morality: If they meant no harm and didn’t know they were doing anything wrong, then you can’t blame them. However, many of the greatest atrocities in human history were facilitated by people who were ignorant of their supporting roles. For example, post office managers in 1930s Germany may have had no intention of aiding genocide, yet their work was vital to distributing Nazi propaganda and recruitment materials.
Even if people make an effort to understand the effects of their actions, the tangled interconnectedness of the modern world is far too complex to fully grasp. Hunter-gatherers developed the ability to comprehend conflicts between individuals, within small communities, and among tribes—but humans’ brains haven’t evolved to grasp justice on a global scale. At the same time, humans’ interconnectedness makes mutual understanding more important than ever before.
When a problem gets too big and complicated to grasp, people often use one of four shortcuts to attempt to understand it:
Since humans often can’t understand and address complicated issues as individuals, people could turn to their communities to work together to make sense of the world—but this approach leads to its own problems. A community can tackle an issue within the community, but when the problems stretch beyond the community, attempts to understand are likely to lead to biased groupthink. In order to address the global problems we face, we need a global community to work together to understand them.
Lesson: Humans are so driven to make sense of the world through stories that they’re often willing to believe lies.
We are living in a post-truth society, in which lies aren’t merely spread in social media posts but are also used to justify government actions. For example, when Russia invaded Crimea in 2014, the Russian government repeatedly denied responsibility for the invasion. The government rationalized that its lie served a higher purpose: reunifying Ukraine with Russia.
How did we get to this era of post-truth? The phenomenon has a long history. In fact, the great accomplishments of human progress are owed in part to humans’ ability to create and get others to believe fictional stories in order to work together toward a collective goal. In other words, humans’ penchant for stories allows strangers to cooperate for common causes. Storytelling is used in this way by different kinds of institutions, including:
In order for these stories to be effective, they can’t be too far-fetched—otherwise, people will dismiss them. On the other hand, they can’t be too close to the truth, because the truth typically lacks the power to inspire and motivate people. Additionally, effective stories don’t have to entirely pull the wool over people’s eyes: People are often willing to believe something enough to act on it, even though, at their core, they know the story is fiction. For example, money is a human invention, and it has no inherent value beyond the paper and metal it’s made from. Most people understand this if they stop and think about it, but that doesn’t make them any less upset when they lose a $100 bill.
Humans’ willingness to swallow fiction doesn’t erase the truth, and people should still seek the truth—especially when believing and perpetuating the fiction causes harm. Everyone has a responsibility to question and investigate the information she consumes, and to keep an eye out for biases she unknowingly has. You can’t know everything, but you can take two important steps:
(Shortform note: Read Chapter 2 of our summary of Sapiens for more about the role in society of collective fictions, such as money and religion.)
Since stories are a powerful force in informing and influencing masses of people, science fiction plays a major role in shaping people’s understanding of technological, economic, and social developments—regardless of how factual it is. As such, science fiction has a huge responsibility to accurately represent modern technologies and the challenges they present.
One common theme in modern science fiction is humans being manipulated by technology to the point that they’re disconnected from reality and their true selves. This plays out similarly in movies like The Matrix and The Truman Show, in which the main characters discover that they’re trapped in artificial worlds where their entire realities are programmed. When they finally escape, the “real” worlds they find resemble the artificial worlds they left.
These science fiction stories reflect people’s fear of being trapped in a box—some form of non-reality—and divorced from their real selves, authentic emotions, and genuine relationships. However, the truth is that you already are trapped in such a box: your mind. Your mind shapes how you interpret and experience everything. Furthermore, your mind is trapped in the box of society, which influences the way you interpret your experiences. For example, TV, movies, songs, and art all mold the way you understand, express, and receive love.
People imagine authenticity as freedom from these boxes, but that doesn’t exist. There’s no way out of the boxes, because, at the innermost level, even your identity is an illusion formed by your brain’s wiring. Instead, authenticity exists inside the matrix. If you experience pain, it doesn’t matter if you cut off your finger or if you’re hooked up to wires simulating the feeling of chopping off a finger—pain is pain, and it feels real to you.
Some science fiction acknowledges this reality. One example is the 2015 Pixar movie Inside Out, which explores the inner workings of the mind of an 11-year-old girl named Riley. Riley’s mind is managed by a cast of characters who each represent a different emotion, including Joy, Sadness, and Anger. The movie illustrates that no single emotion defines a person—rather, her personality and identity are composed of a variety of emotions and selves.
Another, darker example is Aldous Huxley’s 1932 novel Brave New World, which imagines a reality in which a World Government uses social engineering and biotechnology to keep everyone perpetually content. There are no rebellions, no wars, no suffering, and no fear. When a lone dissenter named John the Savage calls on the public to escape this controlling system, they have no motivation to do so. The World Government has created a sort of utopia—but John insists that there is something dystopian about its uniformity and sterility.
John the Savage is arrested for trying to incite rebellion, and he has a conversation with the World Controller, Mustapha Mond. John tells Mustapha that the World Government’s excessive control has eliminated truth, beauty, nobility, and heroism in order to make everyone happy; Mustapha replies that nobility and heroism aren’t needed when there is no war or tragedy. Finally, Mustapha tells John that if he wants the freedom to experience life’s highs and lows, he must claim the right to be unhappy, ill, old, anxious, and miserable. John claims all of these, and he goes off to live in isolation, beyond the control of the World Government. However, John is never able to fully escape the government’s influence, because people under its control hear about him and come to see him as an oddity. Eventually, John tires of being a spectacle and hangs himself.
The only way John was able to escape the World Government—the matrix, the box—was to escape his “self,” albeit by suicide. Although this is grim, the concept of shedding a narrow definition of self could be the key to making it through the 21st century.
Lesson: The modern education system is designed to prepare students for a job market that’s becoming obsolete.
The education that children today receive will determine how well-equipped they are not only to navigate but also to shape the future. However, the modern education system is not fit to prepare children for the 21st century.
First, humans don’t know what the world will look like in 50 or 100 years. This has always been true to some extent, but, in the past, people could reasonably predict what kinds of jobs would exist and generally how government and politics would function by the time their children and grandchildren became adults. Now, technology makes it impossible to know which jobs will become obsolete, what the global political system will look like, and whether the human body will have new capabilities as a result of bioengineering. Without having a reasonable expectation of the future, it’s impossible to know how to prepare children for it.
Second, the focus and goal of the modern education system are outdated. In centuries past, information was scarce. Depending on where and when you lived, you had limited access to books—and, later, radio and television—and whatever information you received may have passed through government censorship. School aimed to arm students with more information in order to broaden their scope of knowledge and view of the world. By contrast, people now face information overload, and instead of trying to censor information, some governments add to the noise by distributing false and misleading information to confuse people. As a result, schools no longer need to pile more information onto students. Instead, they must teach students to make sense of the vast amount of information they take in, to distinguish between trustworthy and questionable sources, and to weave the pieces of reliable information into a comprehensive view of the world.
Third, schools currently put too much emphasis on teaching students skills, such as coding and solving math equations. In the past, such skills prepared students for future jobs. However, at a time when the future job market is a mystery, this model is likely to waste students’ and teachers’ time on tasks that will ultimately be performed by robots. Instead, experts suggest teaching students “the four Cs”—communication, collaboration, critical thinking, and creativity. Additionally, schools should teach fewer technical skills and more life skills, such as how to learn new things, cope with change, and maintain mental balance amid instability.
The only thing that’s certain about the rest of the 21st century is that it will be filled with constant change and uncertainty. Historically, the first part of a person’s life was a period of learning and building an identity, and the rest of her life was spent working and fine-tuning her identity. As the 21st century progresses, this clean division will be replaced by ongoing learning and adapting. As we discussed in Chapter 2, future workers should be prepared to switch careers every decade or so, as their previous professions become automated and obsolete. This way of life will be immensely stressful, so children will be far better served by learning to maintain mental stability amid constant change than they will be by memorizing the Pythagorean theorem.
It’s clear that the existing educational model is not effective for 21st-century needs—but, so far, there is no viable model to replace it. A new educational system needs to be designed to fit an uncertain future, and it needs to be scalable so that it can be as effective in rural Ecuador as it is in wealthy suburban California.
The uncertainty of the future weighs most heavily on today’s children and teens, who face a conundrum:
If children and teens today want to determine their own destinies, they must get to know themselves by also understanding all the external forces that influence them. Getting to know yourself has always been an ingredient in a rich life, but in the 21st century, the stakes are higher. With the rise of technology, you’re actually in a race to get to know yourself faster than the algorithms can get to know you—and the winner gets to decide how you behave, what you buy, and who you vote for.
Reflect on whether your education gave you the tools to succeed as an adult.
Besides fundamental skills like reading and writing, how often do you use the knowledge you learned in grade school (K-12)?
Are there any subjects that you haven’t used since high school or earlier? If so, which subject(s)?
If you had the opportunity to replace that subject with something else, what topic would have served you better as an adult?
If you attended college, how often do you use the knowledge you learned there?
How would you alter your college education to better prepare you for adulthood?
Lesson: People are driven to uncover their reason for living, and they often look for answers in meaning-of-life stories or within themselves.
As people prepare for a new reality and new challenges in the 21st century, they’ll inevitably ponder, “What is the meaning of life?” When people ask this question, what they’re really asking is, “What is the meaning of my life? What is my role in the universe?” Humans have been asking these questions throughout history. More often than not, people want the answer to fit into a story, because humans love stories, and they use stories to make sense out of the world.
Throughout human history, people have come up with countless stories to explain the meaning of life, including:
Each of these explanations has logical holes in it. For example, if you believe that you are part of an eternal circle of life, have you taken into account the fact that eternity will stretch far past human existence? What will the meaning of life be when there are no people around? Or, if you’re a Zionist and your meaning-of-life story begins with Judaism and the Jewish people, was there no meaning to life during the nearly 2 million years of human existence prior? Ultimately, it doesn’t matter to most people if their version of the meaning of life is incomplete—the story just needs to have two features:
Many people believe that, as long as they leave behind some kind of legacy, their life will have meaning. Legacies can be:
This explanation gives people the comfort of believing that, as long as something exists to keep their memories alive, their lives will have been worth remembering. However, there are holes in this logic, as well. First, cultural legacies are easily lost, destroyed, or made irrelevant as time goes on. Second, there’s no guarantee that a person’s family line will live on for centuries, let alone eternity—and if they do, there’s no way to know how or whether future descendants will make their lives meaningful. Third, acts of kindness don’t have inherent meaning; if a person’s purpose is to help others, what is the purpose of those other people’s lives?
Some people don’t try to find meaning that will last beyond their lives. Instead, they believe that their purpose is to find true love. While this may not be meaningful to anyone else in the world, it feels meaningful to the people involved—or, at the very least, being in love distracts people from worrying about the meaning of life.
People believe in these fictional meaning-of-life stories in order to forge their identities. People typically don’t put their full faith in any single meaning-of-life story—instead, they often believe pieces of different stories, thus creating multiple identities.
An individual’s various identities often contradict one another, but humans are so good at compartmentalizing their identities that they generally don’t even notice or acknowledge the contradictions. For example, a conservative activist who’s also a devout Christian is likely to vote against welfare programs and in favor of maintaining gun rights—however, these stances conflict with Christian teachings to help the poor and opt for nonviolence.
If most people really examined their meaning-of-life stories, the stories would fall apart. But that doesn’t matter to most people, because:
If these factors explain why humans are compelled to believe these stories, then how, exactly, do people build their faith in them? The answer is ceremonies, rites, and rituals that turn the mundane into something mystic, which gives the sense of making a fictional story real. Rites and rituals have a place in most institutions, including:
Rites and rituals don’t necessarily bring you closer to finding truth—in fact, they do the opposite by building your belief in a fiction. However, rituals create social stability and harmony among the people who participate in them together.
Sacrifice is the most powerful ritual of all, because it involves a form of suffering, which is undeniable. In other words, why would you sacrifice—or suffer—if the reason for your sacrifice weren’t real? In reality, people sacrifice for fictional reasons all the time, but the false logic is enough to convince believers that their sacrifices prove truth. Most dogmas and belief systems create ideals that are nearly impossible for people to reach, so sacrifices offer a way for people to prove their faith when they inevitably fall short of their high ideals.
Sacrifice can involve fasting during Ramadan or forgoing meat on Fridays during Lent, both of which are meant to make followers lean on God in their suffering, thereby increasing their faith. Once you’ve made a difficult sacrifice, you commit to the purpose even more, if for no other reason than to prove that your suffering wasn’t in vain. This phenomenon even shows up in shopping: If you buy an expensive car, you’re likely to gush over it and downplay its shortcomings because you don’t want to believe that you wasted your money.
There are two forms of sacrifice:
Modern culture places a premium on people’s freedom to maintain multiple identities and subscribe to various meaning-of-life stories. However, liberalism (although it’s going out of style, as we discussed) says that if you evaluate all the different stories, you’ll eventually realize that none of them explains the meaning of life. These stories don’t give meaning to your life—instead, you assign meaning to your life and experiences. Religion is only sacred because humans believe it to be. The universe is only mighty and beautiful because humans attach their feelings to it. You don’t need a story to prove that your life is meaningful—it’s meaningful because you give it meaning.
From that standpoint, the liberal story promotes two causes:
The freedom to create your identity and assign meaning to your life brings into question the free will that drives your creativity and your fight for freedom. As discussed earlier, science shows that your free will is actually the expression of biochemical signals bouncing between neurons and the regurgitation of social and cultural influences. While it’s true that you have the free will to do what you want, you don’t have the free will to decide what you want in the first place. For example, a gay man has the free will to be with another man, but not to choose to be attracted to women.
Some people assume that if you discard the notion of free will, you’ll sit idly by as life happens to you. However, the opposite is true: Once you realize that your thoughts, feelings, and desires don’t define you—they simply reflect your biochemical activity and external influences—you have the opportunity to explore what does define you. This is a difficult process that forces you to question your internal narrator, because you now know that the narrator is being hacked by advertising and brain signals.
While liberalism tells people that they are responsible for giving meaning to life, Buddhists declare that there is no meaning. Life simply is what it is, and there’s no purpose in assigning meaning to it.
Buddhism says that three things are true of life: Everything constantly changes, nothing lasts forever, and nothing is completely satisfying.
According to Buddhism, human suffering is the result of unrealistic expectations. People hope for things to stay the same, or to last forever, or to completely satisfy them—and, inevitably, those hopes fall flat. By contrast, if people can let go of their attachments to things and their insistence on assigning meaning to everything, they can finally feel peace. As we’ll talk about in the next chapter, meditation reinforces the principle that there is no deeper meaning to life by focusing on simple physical actions and sensations—your exhalation has no deeper meaning than releasing air from your body.
Reflect on how you search for the meaning of life.
If you’ve contemplated the meaning of life, where do you look for answers?
What made you seek an answer in that particular source?
Have you found a satisfactory answer? If so, what is it?
Has this chapter made you reconsider your search and your answer? Why or why not?
Lesson: You must understand your mind in order to make sense of the world, and meditation is one of the best ways to do that.
In order to understand life, you must understand your own mind, because your mind determines how you experience, interpret, and react to the world around you. When you understand your mind—including its biases, fears, and complexes—then you can choose your actions more wisely and execute them more effectively. There are many ways to get in tune with your mind, including art, therapy, and physical activity. The author’s method of choice is meditation, which takes your attention away from the noise and distractions of the external world and focuses it on the reality of your breath and bodily sensations. Observing each inhale and exhale keeps your attention on the present reality, which offers a clearer view of life than any story or dogma can.
The better you know your mind, the more you’ll realize that your thoughts and emotions are not as straightforward as you probably think. As we’ve discussed, your thoughts are a reflection of your brain wiring and your external influences. Furthermore, your emotions are a reflection of your physical sensations: External events trigger a physical reaction, and your emotions reflect that physical sensation. For example, if you read a politician’s offensive tweet, the hot feeling in your stomach is your first reaction, and the anger you feel is a reaction to that sensation. Additionally, when your mind inevitably wanders during your meditation, you learn how little control you actually have over your thoughts—and that realization is the first step in gaining that control.
While scientists know much about the brain—and ongoing research with new technologies is constantly adding to that knowledge—they know little about the mind. Whereas the brain is a physical organ with a network of neurons, biochemicals, and synapses, the mind is the source of feelings and subjective experiences, including love, pleasure, pain, and anger. Experts assume that the experiences of the mind are products of the brain’s activity, but there is no evidence, because the mind can’t be studied in a brain scanner.
Currently, there are only two ways to study the mind:
In order to take useful notes on the workings of their own minds, scientists need methodical strategies. Few modern methods exist for self-observation of the mind, so researchers could turn to meditation techniques, which were developed to help people observe their minds and bodies methodically and objectively.
Increasingly, scientists are studying the brains of experienced meditators—but this approach only goes so far. When researchers scan a meditator’s brain, they gain information about how meditation affects the brain, which is valuable, but they don’t gain any insight about the mind. In order to truly observe the mind, scientists would need to practice meditation themselves. This approach requires time and dedication, because when most people begin meditating, they struggle to concentrate for more than a few seconds at a time. Scientists would have to train for months or years, as astronauts do, to finally be able to collect the data they need.
As for the rest of us, we’d be well advised not to wait for science to take on this task. If you don’t begin to learn about your own mind—through meditation or some other means—then algorithms will soon know your thoughts, fears, and desires better than you do.
In a Q&A, the author shares some final thoughts about how to approach the 21st century’s biggest challenges. The key takeaways include:
Despite the huge challenges the world faces in the 21st century, humans have many powerful tools in their collective arsenal. These tools give humankind the power to make things much worse or much better—it all depends upon how we educate ourselves about the issues we face, and how well we can address them as a global civilization.