Technology Will Change The World


The only constant in life is change – and the US is a country with change built into its DNA.

As technology expands and reshapes our economy, it’s time for our social contract to evolve with it. Artificial Intelligence is the latest platform shift that promises to radically alter our prospects for growth and prosperity. It’s time to think more critically about how we could implement universal basic income – the next layer of the social safety net – as a result.

In Silicon Valley, it’s uncontroversial to say that technological innovation is the most powerful driver of prosperity and welfare in history. If innovation in artificial intelligence leads to tremendous value creation, how can we ensure that this value becomes a socially inclusive driver of prosperity?

What kind of policies will provide the next step up in basic human rights and public services that accompanies AI?

Now is the time for all of us in the US – entrepreneurs, policymakers, technologists, and (yes) even technology skeptics – to come together and define the policies that will enable the greatest possible inclusive value creation from the next chapter in technology.

Silicon Valley’s moment in history

There are many reasons that Silicon Valley became the epicenter of tech innovation over the last half-century. Arguably some of the most important are its culture of excitement for progress, its dissatisfaction with the status quo, and the rugged individualism to not just hope for change but to drive it.

Another driver is the pursuit of wealth. Throughout history, technology has been a boon for global living standards, and the builders of that technology have benefited from being on the development side of it. One outcome that is especially apparent around Silicon Valley is that quick fortunes have led to significant inequality in the region, because the prize is so great for the companies and products that make the biggest impact.

As technology development reshapes the world at an increasing rate, how can we create the conditions necessary for everyone to thrive as a result, and in so doing, pave the way for more innovation?

The world is near an inflection point in artificial intelligence, arguably one of the most important production shifts since the first industrial revolution. How can the US leverage this shift to drive a global step-up in public services and our basic standard of living?

We are on the edge of a universal basic income zeitgeist, and the time to start thinking critically about these questions is now.

When it comes to predicting the future, history can be a surprisingly reliable guide. Past step-function changes in technology have always eventually led to improvements in social safety nets, and AI could well do the same. Let’s look at a few of those major platform shifts to better understand how AI could get us to UBI:

  1. The first two industrial revolutions
  2. Electrification
  3. Personal computers
  4. Offshoring
  5. The internet

1750 to 1914: The (First) Industrial Revolutions

It’s uncontroversial to say that tech has been an increasingly important force reshaping the world economy since the first industrial revolution.

The inflection points in population, production, and wealth over that period correspond to the first industrial revolution in Great Britain in the 1700s and the second in the US in the 1800s. The first two industrial revolutions revolved around the transition from an agrarian to a manufacturing economy, the use of increasingly efficient sources of energy (coal and steam power, followed by electricity), and the reorganization of mass production via factories.

The increased capacity of industrial printing presses also led to the dissemination of knowledge on a scale never seen before. This is in no way an exhaustive list of factors, but broadly speaking, leaps in technology and energy were the primary drivers of a shift that changed the world forever. Trains moved heavy cargo across continents, cars went from curiosity to consumer staple, basic goods weren’t reliant on artisans for their production…

1880s to 1950: Electrification

Another massive leap came with the mass adoption of electrification, which began in factories and spread to households. Suddenly, the ability to read and do productive work at night, coupled with the wave of electricity-dependent technologies that followed (refrigerators, telephones, radios, etc.), spread efficiency gains from factories to households around the world. (A good read here is Empires of Light: Edison, Tesla, Westinghouse, and the Race to Electrify the World.)

These benefits led to huge corresponding leaps in the average standard of living.

1970s to today: Personal Computers

One of the next major leaps of corporate technology into the household was the rise of personal computers beginning in the 1970s, led by Apple, IBM, and Microsoft among others. The same way industrial manufacturing reshaped the world of atoms to produce massive consumer surplus, industrial computers gave companies and researchers the ability to perform complex calculations, more efficiently store and transmit information, and run sophisticated ‘programs’ for specific purposes. (A great book on the history of computers, which has become a Silicon Valley shibboleth, is Walter Isaacson’s The Innovators.)

In the late 2000s, computers became even more personal: smartphones replaced feature phones as the dominant mobile technology, and the core unit of computing moved from the household to the individual. Computers became smaller and more mobile, and the programs running on them adjusted to their new purpose, giving consumers an incredible array of new abilities. Imagine being lost in a city and trying to find your way home – or translating something into another language, or pulling up a quick reference, or communicating instantly with the rest of the world – without a smartphone.

Like electricity, smartphones spread quickly not just in developed countries but in emerging markets as well.

1989 to 2008: Globalization 3.0 (The Global Supply Chain)

While personal computers were growing in ubiquity, another important shift was taking place globally – and this one arguably had as much to do with international politics as with technology.

Globalization refers to the impact of trade and technology in making the world a more connected and interdependent place. With the fall of the Soviet Union and the waning of communism in the late 1980s and early 1990s, a third of the world economy opened up to trade for the first time. As communication and transportation improved between the first and second worlds, as well as with quickly growing ‘third world’ economies like China and India, more and more of the manufacturing that had previously been concentrated in Europe and the US moved offshore.

This is what the World Economic Forum refers to as ‘Globalization 3.0.’ The first wave of globalization in their model was the creation of global textile and industrial goods supply chains in the UK in the 1800s, followed by the second wave of global factories and industrialization in the second half of the twentieth century. The current wave (4.0) is the globalization of digital goods and services.

In his book The World is Flat, New York Times columnist Thomas L. Friedman outlines how offshoring leveled the playing field for the international economy, creating two massive primary benefits: increased economic output and standards of living in the east (the Chinese ‘economic miracle’), and lower consumer prices (and increased purchasing power) in the west. Globalization is so all-pervasive today that Pacific salmon caught in Seattle and Alaska are shipped to China to be filleted before being shipped back to be served in US restaurants.

Globalization did not end with manufacturing jobs moving from west to east: the service economy – including sectors like call centers, tech repair, and research – has continued to move offshore to lower-cost centers, as communication and transportation have become more efficient. Former agrarian and manufacturing hubs are increasingly deriving their GDP from services.

Late 1990s to Today: The Internet

Lastly, the advent of the internet and its mass consumer adoption from the late 1990s to today has created an interconnected world where information is instantly accessible and disseminated, economic collaboration is possible across vast distances, and more people participate by default in the global economy rather than in a merely regional one.

This is in no way an exhaustive list of the factors that led to radical changes in economic production over the last 300 years – pasteurization, increases in the production and preservation of food, shifting workforce labor participation, antibiotics, condensed travel times and costs…

Innovation – i.e. the continued accumulation of technological breakthroughs building on each other – continues to be the most powerful driver of prosperity and welfare the world has ever seen. This is why economists place so much emphasis on technology growth and efficiency when measuring total factor productivity, a metric that captures how much output an economy can produce from a given set of inputs.
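For readers who want the mechanics, here is a minimal sketch (in Python, with purely illustrative figures rather than real data) of how total factor productivity is commonly backed out as a ‘Solow residual’, assuming a standard Cobb-Douglas production function with a capital share of about 0.3:

```python
# Total factor productivity as a Solow residual, assuming a Cobb-Douglas
# production function: Y = A * K^alpha * L^(1 - alpha).
# All figures below are illustrative, not real economic data.

def total_factor_productivity(output, capital, labor, alpha=0.3):
    """Back out TFP (A): the part of output not explained by capital and labor inputs."""
    return output / (capital ** alpha * labor ** (1 - alpha))

# Hypothetical economy: same capital and labor, rising output over a decade.
tfp_then = total_factor_productivity(output=1_000, capital=3_000, labor=500)
tfp_now = total_factor_productivity(output=1_300, capital=3_000, labor=500)
print(f"TFP growth: {(tfp_now / tfp_then - 1):.0%}")  # 30% -- a pure efficiency gain
```

The point of the residual is exactly the author’s: when output grows faster than the measured inputs, the difference is attributed to technology and efficiency.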

It won’t be any surprise to people working in the tech sector that the profit motive and pursuit of wealth have been incredibly powerful drivers of innovation: it’s the potential return on equity that keeps startup employees working nights and weekends and ruthlessly optimizing their products to outsell the competition. And this pursuit leads to the development of technologies like cell phones, computers, and internet-enabled services, each of which creates a massive public surplus.

A higher ceiling empowers us to build a higher floor

Major economic shifts throughout history, such as the industrial revolution and mass electrification, led to corresponding shifts in the basic rights and protections afforded to people in those societies.

Three illustrative cases are (1) public education, (2) workers’ rights and protections, and (3) universal healthcare.

The advent of the printing press, miniaturization of books (from large religious tomes to handheld books and pamphlets), and mass-production of reading materials meant that many Europeans could afford books for the first time. Electrification meant that people could easily read through the night after a full day of work. Literacy rates increased dramatically as a result, and education rates increased with them. Suddenly, regular people could read the arguments of politicians and philosophers just as easily as the nobles and clergy they had previously relied on to read those arguments for them.

This was one of the primary catalysts of the Enlightenment in Europe in the late 1600s and 1700s.

Higher literacy and education rates led to a more informed populace, who in turn demanded from their governments the right to vote, fair taxation, freedom from persecution, basic civil liberties, and “life, liberty, and the pursuit of happiness.” (The American and French revolutions were themselves outcomes of the Enlightenment.)

Similarly, the growing share of factory workers in the labor force led to the creation of workers’ unions, the (now conventionally accepted) five-day workweek, eight-hour days, paid vacation, workers’ compensation for workplace injuries, and more.

Each major platform shift and step up in economic production eventually facilitated a step up in basic human freedoms and public services.

(Of course, as civic rights like voting became more inclusive, politicians started to fund programs that would win them elections. So there is a natural political effect in which expansions in civic rights lead to expansions in public benefits.)

Centuries ago, general education was a right reserved in the west for nobility and the families of rich merchants. It would have been unthinkable to tell a medieval burgher that universal education – or even literacy – was a basic human right. They would have immediately pointed to the difficulty of providing every child with an education, and warned about the dangers of an over-educated population, one less likely to put up with the repressions and privations of the era.

Decades ago, the concept of universal healthcare was just as unthinkable. It would have been anathema to most countries and policymakers in the pre-Bretton Woods world to suggest that countries should publicly guarantee to subsidize or cover the costs of basic healthcare for all their citizens. For most of history, access to healthcare was considered a privilege of those who could afford it.

And yet today, an incredible number of countries cover the costs of basic healthcare services for their citizens.

Definitions of universal healthcare vary, but over 50 countries provide free, universal healthcare to all of their citizens, and many more provide free (but not universal) or universal (but not free) care.

None of these changes – public education, defined workweeks and worker protections, universal healthcare – would have been possible without the massive economic growth and productivity gains that resulted from technological platform shifts.

The most recent economic shifts (Globalization 3.0 and the spread of the internet) have connected the world in a way that allowed many of these public services and protections to expand from just being offered in ‘developed’ markets to many emerging economies as well.

2023 to ?: Artificial Intelligence and the Next Platform Shift

We are hitting an early inflection point in the development of artificial intelligence, catalyzed by the release of programs like OpenAI’s ChatGPT and a long list of image generators in the mold of DALL-E. It now seems inevitable that artificial intelligence will come to define the next phase of the economy. The question is no longer ‘if’ but ‘when’ most products will leverage AI to create efficiency gains, enable production, and lower costs on a level we haven’t seen before.

As tech writer Packy McCormick wrote in the AppetiZIRP, “Over the past couple of centuries, the proportion of Americans working to meet the population’s basic needs has declined dramatically thanks to scientific and technological innovation, automation, and globalization. So we invented new jobs […] to meet brand new needs higher up Maslow’s Hierarchy and to keep people busy and filled with meaning. Many of those jobs are now on the cusp of being automated away too. The most incredible stat I’ve come across on this front comes from Vaclav Smil’s How the World Really Works: “the share of the US population working as farmers declined from 83% in 1800 to 1% today.” The output is that it takes fewer people to feed more and more people. And the fewer people working on feeding us, the more people to work on satisfying other needs, for others or for themselves.”

Like so many of the technological leaps before it, as AI restructures the economy, it will lead to short-term job dislocation. And my point here is not to minimize this effect: certain jobs will be rendered inefficient or redundant by AI, which makes it imperative that we invest in public-private cooperation in the US to soften the blow of this dislocation.

Just as automated looms put weavers out of business and catalyzed the Luddites to fight against textile factories, and just as cheaper supply chains and auto-assembly machines put blue-collar car factory employees out of work in the US, so AI will make some jobs economic history. More formulaic jobs, like paralegal work and customer service, will likely be some of the first to be radically trimmed by competing AI product offerings. But the advent of ChatGPT and Stable Diffusion has also taught us that much of what we consider to be ‘creative work’ is surprisingly formulaic and replicable, so jobs in creative fields are just as vulnerable.

This doesn’t mean that the world is hurtling into a period of wide-scale unemployment (as has always been predicted in the past). It means that the nature of human jobs will change, and people will instead work in new jobs augmented by AI rather than competing directly against it. Paralegals will be able to increase the rate at which they help draft and interpret contracts, and can focus on higher-skill tasks. Customer service reps will be able to increase the number of tickets they close out in a given day. Artists will paint with the support of AI tools (and surely there will be some bespoke demand for old-fashioned, non-AI art). Humans are bad at predicting the future but good at extrapolating – and extrapolating from past platform shifts, we know that new jobs and abilities will be created to replace old ones; we just don’t know which ones yet.

This type of shift highlights the creative destruction of technology leaps: each new technology creates massive efficiencies, but also dislocates past economic participants. Amazon’s marketplace has been a great catalyst for book sales, but it has put many bookstores out of business.

Even though job dislocations are smoothed-out in the long-run, in the short run they can be massively disruptive. New job types are not created overnight, and in the immediate aftermath of new AI tools hitting the market, some people will find themselves competing directly against AI.

As Anton Korinek, Martin Schindler, and Joseph Stiglitz argue in Technological Progress, Artificial Intelligence, and Inclusive Growth, AI “may, like the Industrial Revolution, represent a critical turning point in history. Increasing automation in manufacturing may lead to increases in wage inequality, declining labor demand, and increased skill premia in most countries […] Economic analysis, based on models appropriate to this new era, has the potential to help in the development of policies—both at the global and national level—that can mitigate these adverse effects, to ensure that this new era of innovation will lead to increased standards of living for all, including the billions living in developing countries.”

So what kind of policies will provide the next step up in basic human rights and public services that accompanies AI?

The Next Floor: Universal Basic Income

Pundits throughout history have predicted that improvements in technology would effectively bring about ‘the end of work,’ the idea being that machines would do all the work for us and humans could live lives of leisure (h/t John Maynard Keynes). The reality is that technology doesn’t replace work; it allows us to work more efficiently and effectively, undertaking projects at a scale that the Pharaonic builders of the pyramids could never have imagined 4,500 years ago.

It will be the same with artificial intelligence: AI will replace many jobs and it will also create many new ones, to the net effect that humans are able to coordinate and build on a previously unimaginable scale, with a resulting increase in prosperity.

So it’s only natural to ask how our public services and rights will improve as a result.

The most likely next floor seems to be a universal right to cover the basic costs of living to afford necessities like food and shelter – not just for unemployed people or those living below the poverty line, but for everyone. Just as previous innovations paved the way for education and healthcare as fundamental human rights, this is the right time to implement a universal basic income.

Universal basic income is ordinarily defined along the lines of, “a recurring cash payment provided by the government to its residents on a monthly or yearly basis. However, unlike other government welfare schemes, UBI is unconditional—meaning you get the money with virtually no strings attached—including no restrictions on how much money you earn or your relationship or parental status. Simply put, it’s extra money to help you cover your bills, rent, childcare, expenses—whatever you need.”

What’s important about UBI is that the payment is made to everyone, not just to traditional welfare recipients. This matters because, in the US, hourly compensation and wages have stagnated since 1973, while the prices of many essential goods and services like education and healthcare continue to rise. (For more background on Cost Disease, this is a great primer.)

The stagnation in wage growth during an unprecedented period of technological innovation is not a coincidence: as MIT economist Daron Acemoglu notes, “At least half the rising gap in wages among American workers in the last 40 years comes from the automation of tasks once done by people.” The knee-jerk response to that statistic might be to suggest that we need to automate less or to prevent automation from replacing jobs. But as we’ve discussed, less technology is certainly not the answer – technological change, including automation, has been an incredible historical catalyst for improvements in standard of living.

The answer is, instead, to ensure that these improvements in quality of life are evenly distributed. An easy place to start is ensuring that nobody lives below the poverty line. This is why it is so important that UBI focuses on providing an unconditional cash transfer for everyone, not just for people who meet a needs-based set of criteria. The goal should never be equality of outcomes – it should be equality of opportunity.

Pseudonymous Twitter personality and economic commentator James Medlock illustrates this well in his interview with Noah Smith, “I think the idea should be to build a comprehensive set of universal systems that provide a foundation for people’s lives, and then if there are a few programs like Housing First where means-testing makes sense, that’s all well and good. Social democratic countries like Sweden still have some means-tested last ditch assistance programs, but they only spend .3% of GDP on them compared to our 2.5% of GDP, because the universal programs on top of them prevent people from needing them in the first place. I think the problem is when people see poor relief as the be-all-end-all point of the welfare state, when a good welfare state does so much more than that.”

The US’s own data echo this finding – much of the cost of welfare is administrative, i.e. means testing, tracking, and monitoring. The universality of universal basic income could, counterintuitively, erode one of the primary drivers of the cost of welfare programs.

UBI by definition is an unconditional cash transfer made available to everyone, calibrated specifically to cover the costs of the basic goods that people need in order to be socially functional.
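To make that calibration concrete, here is a back-of-the-envelope sketch in Python. The population and payment figures are assumptions chosen purely for illustration, not a policy proposal:

```python
# Rough gross cost of an unconditional, universal cash transfer.
# Both inputs are illustrative assumptions, not actual policy parameters.

adult_population = 258_000_000   # approximate US adult population (assumption)
monthly_payment = 1_000          # hypothetical UBI per adult per month, in dollars

gross_annual_cost = adult_population * monthly_payment * 12
print(f"Gross annual outlay: ${gross_annual_cost / 1e12:.1f} trillion")  # ~$3.1 trillion
```

The gross figure overstates the net fiscal cost, since much of the transfer would flow back through the tax system from higher earners, but it gives a sense of the scale such a program implies.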

For political reasons, needs-based cash transfers from wealthier taxpayers to poorer ones (which effectively comprise the ‘welfare state’) have always been contentious – not least because of their perceived impact on recipients’ morale and desire to work. It is a shame that the same pursuit of wealth that catalyzes innovation is also expressed through activities like lobbying for tax loopholes or blocking the development of low-income housing. The same profit-seeking behavior that creates positive externalities through the development of technology erodes some of those externalities by fighting against redistribution. We should work as a tech community to channel the positive-sum side of the pursuit of wealth while overcoming the urge to engage in zero-sum behavior.

This is why no less a figure than OpenAI’s Sam Altman is among the most prominent proponents of harnessing the outputs and benefits of AI to create the conditions for universal basic income (as well as of raising the minimum wage).

Luckily, UBI programs are no longer theoretical: many cities and municipalities globally have already experimented with small-scale UBI (you can read the full list here).

So if the tech community were to come together and propose harnessing the benefits of economic transition from AI to create a national UBI program in the US, what would that look like?

What are the policy, technical, and societal components that are required for a baseline UBI program?

And what are some of the products that would enable a well-functioning UBI system – real-time verifiable credentials, real-time payments for disbursements and collections, healthcare data access, and so on?

The policy and tech implementation specifics are beyond my ability and expertise to recommend. But these are incredibly important questions, and the time is now to begin thinking critically about them.

This piece would not have been possible without the thoughtful feedback of Danny Crichton, Bruno Werneck, Lex Sokolin, and Mario Ruiz.


